00:00:00.000 Started by upstream project "autotest-per-patch" build number 127143
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "jbp-per-patch" build number 24289
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:01.562 The recommended git tool is: git
00:00:01.562 using credential 00000000-0000-0000-0000-000000000002
00:00:01.564 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:01.577 Fetching changes from the remote Git repository
00:00:01.579 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:01.593 Using shallow fetch with depth 1
00:00:01.593 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:01.593 > git --version # timeout=10
00:00:01.606 > git --version # 'git version 2.39.2'
00:00:01.606 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:01.618 Setting http proxy: proxy-dmz.intel.com:911
00:00:01.618 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/10/24310/5 # timeout=5
00:00:07.247 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.261 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.276 Checking out Revision 571d49b51a09ef9417806101d0b05bbb896ef7c3 (FETCH_HEAD)
00:00:07.276 > git config core.sparsecheckout # timeout=10
00:00:07.290 > git read-tree -mu HEAD # timeout=10
00:00:07.309 > git checkout -f 571d49b51a09ef9417806101d0b05bbb896ef7c3 # timeout=5
00:00:07.333 Commit message: "jenkins/autotest: remove redundant RAID flags"
00:00:07.333 > git rev-list --no-walk 178f233a2a13202f6c9967830fd93e30560100d5 # timeout=10
00:00:07.430 [Pipeline] Start of Pipeline
00:00:07.441 [Pipeline] library
00:00:07.443 Loading library shm_lib@master
00:00:07.443 Library shm_lib@master is cached. Copying from home.
00:00:07.460 [Pipeline] node
00:00:07.475 Running on WFP6 in /var/jenkins/workspace/nvmf-phy-autotest
00:00:07.476 [Pipeline] {
00:00:07.485 [Pipeline] catchError
00:00:07.486 [Pipeline] {
00:00:07.498 [Pipeline] wrap
00:00:07.507 [Pipeline] {
00:00:07.513 [Pipeline] stage
00:00:07.515 [Pipeline] { (Prologue)
00:00:07.692 [Pipeline] sh
00:00:07.977 + logger -p user.info -t JENKINS-CI
00:00:07.996 [Pipeline] echo
00:00:07.998 Node: WFP6
00:00:08.004 [Pipeline] sh
00:00:08.305 [Pipeline] setCustomBuildProperty
00:00:08.315 [Pipeline] echo
00:00:08.316 Cleanup processes
00:00:08.320 [Pipeline] sh
00:00:08.598 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.598 2274545 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.611 [Pipeline] sh
00:00:08.894 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:00:08.894 ++ grep -v 'sudo pgrep'
00:00:08.894 ++ awk '{print $1}'
00:00:08.894 + sudo kill -9
00:00:08.894 + true
00:00:08.909 [Pipeline] cleanWs
00:00:08.919 [WS-CLEANUP] Deleting project workspace...
00:00:08.919 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.927 [WS-CLEANUP] done
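The cleanup step above pipes pgrep through grep and awk to find SPDK processes left over from an earlier run; in this run the substitution came back empty, so the bare kill -9 failed and the + true guard absorbed the error. A minimal sketch of that idiom, assuming the same workspace path (the ws variable name is illustrative, not from the pipeline):

    #!/usr/bin/env bash
    # Kill any SPDK processes left over from a previous run in this workspace.
    ws=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # pgrep -af prints "PID CMDLINE" for each match, including our own pgrep,
    # so filter that out before extracting the PIDs with awk.
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # With an empty PID list, kill -9 exits non-zero; the guard keeps the stage green.
    sudo kill -9 $pids || true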
00:00:08.931 [Pipeline] setCustomBuildProperty
00:00:08.948 [Pipeline] sh
00:00:09.230 + sudo git config --global --replace-all safe.directory '*'
00:00:09.296 [Pipeline] httpRequest
00:00:09.323 [Pipeline] echo
00:00:09.324 Sorcerer 10.211.164.101 is alive
00:00:09.330 [Pipeline] httpRequest
00:00:09.335 HttpMethod: GET
00:00:09.335 URL: http://10.211.164.101/packages/jbp_571d49b51a09ef9417806101d0b05bbb896ef7c3.tar.gz
00:00:09.336 Sending request to url: http://10.211.164.101/packages/jbp_571d49b51a09ef9417806101d0b05bbb896ef7c3.tar.gz
00:00:09.343 Response Code: HTTP/1.1 200 OK
00:00:09.344 Success: Status code 200 is in the accepted range: 200,404
00:00:09.344 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_571d49b51a09ef9417806101d0b05bbb896ef7c3.tar.gz
00:00:16.119 [Pipeline] sh
00:00:16.425 + tar --no-same-owner -xf jbp_571d49b51a09ef9417806101d0b05bbb896ef7c3.tar.gz
00:00:16.441 [Pipeline] httpRequest
00:00:16.462 [Pipeline] echo
00:00:16.464 Sorcerer 10.211.164.101 is alive
00:00:16.473 [Pipeline] httpRequest
00:00:16.478 HttpMethod: GET
00:00:16.479 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:16.479 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:00:16.486 Response Code: HTTP/1.1 200 OK
00:00:16.487 Success: Status code 200 is in the accepted range: 200,404
00:00:16.487 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:42.121 [Pipeline] sh
00:01:42.406 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz
00:01:44.951 [Pipeline] sh
00:01:45.232 + git -C spdk log --oneline -n5
00:01:45.233 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:45.233 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:45.233 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:45.233 d005e023b raid: fix empty slot not updated in sb after resize
00:01:45.233 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set
00:01:45.245 [Pipeline] }
00:01:45.260 [Pipeline] // stage
00:01:45.269 [Pipeline] stage
00:01:45.271 [Pipeline] { (Prepare)
00:01:45.290 [Pipeline] writeFile
00:01:45.306 [Pipeline] sh
00:01:45.590 + logger -p user.info -t JENKINS-CI
00:01:45.602 [Pipeline] sh
00:01:45.883 + logger -p user.info -t JENKINS-CI
00:01:45.894 [Pipeline] sh
00:01:46.178 + cat autorun-spdk.conf
00:01:46.178 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:46.178 SPDK_TEST_NVMF=1
00:01:46.178 SPDK_TEST_NVME_CLI=1
00:01:46.178 SPDK_TEST_NVMF_NICS=mlx5
00:01:46.178 SPDK_RUN_UBSAN=1
00:01:46.178 NET_TYPE=phy
00:01:46.184 RUN_NIGHTLY=0
00:01:46.189 [Pipeline] readFile
00:01:46.214 [Pipeline] withEnv
00:01:46.217 [Pipeline] {
00:01:46.230 [Pipeline] sh
00:01:46.514 + set -ex
00:01:46.514 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]]
00:01:46.514 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:46.514 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:46.514 ++ SPDK_TEST_NVMF=1
00:01:46.514 ++ SPDK_TEST_NVME_CLI=1
00:01:46.514 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:46.514 ++ SPDK_RUN_UBSAN=1
00:01:46.514 ++ NET_TYPE=phy
00:01:46.514 ++ RUN_NIGHTLY=0
00:01:46.514 + case $SPDK_TEST_NVMF_NICS in
00:01:46.514 + DRIVERS=mlx5_ib
00:01:46.514 + [[ -n mlx5_ib ]]
00:01:46.514 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:53.083 rmmod: ERROR: Module irdma is not currently loaded
00:01:53.083 rmmod: ERROR: Module i40iw is not currently loaded
00:01:53.083 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:53.083 + true
00:01:53.083 + for D in $DRIVERS
00:01:53.083 + sudo modprobe mlx5_ib
00:01:53.083 + exit 0
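The Prepare stage maps SPDK_TEST_NVMF_NICS=mlx5 to DRIVERS=mlx5_ib, unloads every RDMA NIC driver it knows about (the rmmod errors for modules that were never loaded are expected and swallowed), then loads only the one this job needs. A sketch reconstructing that step from the xtrace above; case arms for NIC types other than mlx5 are omitted here:

    #!/usr/bin/env bash
    source ./autorun-spdk.conf
    case $SPDK_TEST_NVMF_NICS in
        mlx5) DRIVERS=mlx5_ib ;;    # this job: SPDK_TEST_NVMF_NICS=mlx5
    esac
    if [[ -n $DRIVERS ]]; then
        # Unload all candidate drivers; absent modules just print an error.
        sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
        for D in $DRIVERS; do
            sudo modprobe $D    # reload only what this run requires
        done
    fi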
00:01:53.092 [Pipeline] }
00:01:53.111 [Pipeline] // withEnv
00:01:53.116 [Pipeline] }
00:01:53.135 [Pipeline] // stage
00:01:53.145 [Pipeline] catchError
00:01:53.147 [Pipeline] {
00:01:53.162 [Pipeline] timeout
00:01:53.163 Timeout set to expire in 1 hr 0 min
00:01:53.165 [Pipeline] {
00:01:53.181 [Pipeline] stage
00:01:53.183 [Pipeline] { (Tests)
00:01:53.198 [Pipeline] sh
00:01:53.482 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest
00:01:53.482 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest
00:01:53.482 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest
00:01:53.482 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]]
00:01:53.482 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:53.482 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output
00:01:53.482 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]]
00:01:53.482 + [[ ! -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:53.482 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output
00:01:53.482 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]]
00:01:53.482 + [[ nvmf-phy-autotest == pkgdep-* ]]
00:01:53.482 + cd /var/jenkins/workspace/nvmf-phy-autotest
00:01:53.482 + source /etc/os-release
00:01:53.482 ++ NAME='Fedora Linux'
00:01:53.482 ++ VERSION='38 (Cloud Edition)'
00:01:53.482 ++ ID=fedora
00:01:53.482 ++ VERSION_ID=38
00:01:53.482 ++ VERSION_CODENAME=
00:01:53.482 ++ PLATFORM_ID=platform:f38
00:01:53.482 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:53.482 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:53.482 ++ LOGO=fedora-logo-icon
00:01:53.482 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:53.482 ++ HOME_URL=https://fedoraproject.org/
00:01:53.482 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:53.482 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:53.482 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:53.482 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:53.482 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:53.482 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:53.482 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:53.482 ++ SUPPORT_END=2024-05-14
00:01:53.482 ++ VARIANT='Cloud Edition'
00:01:53.482 ++ VARIANT_ID=cloud
00:01:53.482 + uname -a
00:01:53.482 Linux spdk-wfp-06 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:53.482 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status
00:01:56.034 Hugepages
00:01:56.034 node hugesize free / total
00:01:56.034 node0 1048576kB 0 / 0
00:01:56.034 node0 2048kB 0 / 0
00:01:56.034 node1 1048576kB 0 / 0
00:01:56.034 node1 2048kB 0 / 0
00:01:56.034
00:01:56.034 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:56.034 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:01:56.034 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:01:56.034 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:01:56.034 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:01:56.034 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:01:56.034 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:01:56.034 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:01:56.034 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:01:56.034 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:01:56.034 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:01:56.034 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:01:56.034 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:01:56.034 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:01:56.034 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:01:56.034 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:01:56.034 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:01:56.034 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
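The setup.sh status table above reports 0 / 0 hugepages on both NUMA nodes, i.e. nothing has been reserved yet at this point in the run. A sketch of driving the same script to reserve hugepages and later release the devices, assuming setup.sh honors the NRHUGE environment variable (treat that knob as an assumption; this log does not show it being used):

    #!/usr/bin/env bash
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    sudo NRHUGE=2048 ./scripts/setup.sh     # reserve 2048 x 2048kB hugepages and bind devices
    sudo ./scripts/setup.sh status          # re-print the table shown above
    sudo ./scripts/setup.sh reset           # hand devices back to the kernel drivers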
00:01:56.034 + rm -f /tmp/spdk-ld-path
00:01:56.034 + source autorun-spdk.conf
00:01:56.034 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:56.034 ++ SPDK_TEST_NVMF=1
00:01:56.034 ++ SPDK_TEST_NVME_CLI=1
00:01:56.034 ++ SPDK_TEST_NVMF_NICS=mlx5
00:01:56.034 ++ SPDK_RUN_UBSAN=1
00:01:56.034 ++ NET_TYPE=phy
00:01:56.034 ++ RUN_NIGHTLY=0
00:01:56.034 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:56.034 + [[ -n '' ]]
00:01:56.034 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:56.034 + for M in /var/spdk/build-*-manifest.txt
00:01:56.034 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:56.034 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:56.034 + for M in /var/spdk/build-*-manifest.txt
00:01:56.034 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:56.034 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/
00:01:56.034 ++ uname
00:01:56.034 + [[ Linux == \L\i\n\u\x ]]
00:01:56.034 + sudo dmesg -T
00:01:56.034 + sudo dmesg --clear
00:01:56.034 + dmesg_pid=2275503
00:01:56.034 + [[ Fedora Linux == FreeBSD ]]
00:01:56.034 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:56.034 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:56.034 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:56.034 + [[ -x /usr/src/fio-static/fio ]]
00:01:56.034 + export FIO_BIN=/usr/src/fio-static/fio
00:01:56.034 + FIO_BIN=/usr/src/fio-static/fio
00:01:56.034 + sudo dmesg -Tw
00:01:56.034 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:56.034 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:56.034 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:56.034 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:56.034 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:56.034 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:56.034 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:56.034 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:56.034 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf
00:01:56.310 Test configuration:
00:01:56.310 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:56.310 SPDK_TEST_NVMF=1
00:01:56.310 SPDK_TEST_NVME_CLI=1
00:01:56.310 SPDK_TEST_NVMF_NICS=mlx5
00:01:56.310 SPDK_RUN_UBSAN=1
00:01:56.310 NET_TYPE=phy
00:01:56.310 RUN_NIGHTLY=0
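Everything autorun.sh does from here on is driven by the conf file it is handed; the "Test configuration" block above is just that file echoed back. A sketch of reproducing this job's configuration outside Jenkins, assuming an SPDK checkout in ./spdk (paths are illustrative):

    #!/usr/bin/env bash
    # Write the same conf this job used, then hand it to autorun.sh.
    cat > autorun-spdk.conf <<'EOF'
    SPDK_RUN_FUNCTIONAL_TEST=1
    SPDK_TEST_NVMF=1
    SPDK_TEST_NVME_CLI=1
    SPDK_TEST_NVMF_NICS=mlx5
    SPDK_RUN_UBSAN=1
    NET_TYPE=phy
    RUN_NIGHTLY=0
    EOF
    ./spdk/autorun.sh "$PWD/autorun-spdk.conf"   # sources the conf, echoes it, runs the suite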
09:50:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
09:50:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
09:50:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
09:50:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
09:50:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:50:41 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:50:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:50:41 -- paths/export.sh@5 -- $ export PATH
09:50:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:50:41 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
09:50:41 -- common/autobuild_common.sh@447 -- $ date +%s
09:50:41 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721893841.XXXXXX
09:50:41 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721893841.xO1O7C
09:50:41 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
09:50:41 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
09:50:41 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
09:50:41 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
09:50:41 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
09:50:41 -- common/autobuild_common.sh@463 -- $ get_config_params
09:50:41 -- common/autotest_common.sh@398 -- $ xtrace_disable
09:50:41 -- common/autotest_common.sh@10 -- $ set +x
09:50:41 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
09:50:41 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
09:50:41 -- pm/common@17 -- $ local monitor
09:50:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:50:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:50:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:50:41 -- pm/common@21 -- $ date +%s
09:50:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
09:50:41 -- pm/common@21 -- $ date +%s
09:50:41 -- pm/common@25 -- $ sleep 1
09:50:41 -- pm/common@21 -- $ date +%s
09:50:41 -- pm/common@21 -- $ date +%s
09:50:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893841
09:50:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893841
09:50:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893841
09:50:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893841
00:01:56.310 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893841_collect-vmstat.pm.log
00:01:56.310 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893841_collect-cpu-load.pm.log
00:01:56.310 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893841_collect-cpu-temp.pm.log
00:01:56.310 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893841_collect-bmc-pm.bmc.pm.log
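start_monitor_resources above launches the collect-cpu-load, collect-vmstat, collect-cpu-temp, and collect-bmc-pm samplers against the output/power directory, and the next xtrace entry installs trap stop_monitor_resources EXIT to tear them down. A generic sketch of that background-sampler pattern; the PID bookkeeping here is an assumption, not SPDK's actual pm/common implementation:

    #!/usr/bin/env bash
    power_dir=/var/jenkins/workspace/nvmf-phy-autotest/output/power
    mkdir -p "$power_dir"
    pids=()
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        # -d: output directory, -l: log to file, -p: log-name prefix (flags as seen above)
        spdk/scripts/perf/pm/"$mon" -d "$power_dir" -l -p "monitor.autobuild.sh.$(date +%s)" &
        pids+=($!)
    done
    # Stop every sampler when the build script exits, successfully or not.
    trap 'kill "${pids[@]}" 2>/dev/null || true' EXIT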
00:01:57.247 09:50:42 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:57.247 09:50:42 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:57.247 09:50:42 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:57.247 09:50:42 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:01:57.247 09:50:42 -- spdk/autobuild.sh@16 -- $ date -u
00:01:57.247 Thu Jul 25 07:50:42 AM UTC 2024
00:01:57.247 09:50:42 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:57.247 v24.09-pre-321-g704257090
00:01:57.247 09:50:42 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:57.247 09:50:42 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:57.247 09:50:42 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:57.247 09:50:42 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:57.247 09:50:42 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:57.247 09:50:42 -- common/autotest_common.sh@10 -- $ set +x
00:01:57.247 ************************************
00:01:57.247 START TEST ubsan
00:01:57.247 ************************************
00:01:57.247 09:50:42 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:57.247 using ubsan
00:01:57.247
00:01:57.247 real 0m0.000s
00:01:57.247 user 0m0.000s
00:01:57.247 sys 0m0.000s
00:01:57.247 09:50:42 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:57.247 09:50:42 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:57.247 ************************************
00:01:57.247 END TEST ubsan
00:01:57.247 ************************************
00:01:57.247 09:50:42 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:57.247 09:50:42 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:57.247 09:50:42 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:57.247 09:50:42 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:57.247 09:50:42 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:57.247 09:50:42 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:57.247 09:50:42 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:57.247 09:50:42 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:57.247 09:50:42 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared
00:01:57.506 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk
00:01:57.506 Using default DPDK in /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:01:57.765 Using 'verbs' RDMA provider
00:02:10.911 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal.log)...done.
00:02:23.125 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:02:23.125 Creating mk/config.mk...done.
00:02:23.125 Creating mk/cc.flags.mk...done.
00:02:23.125 Type 'make' to build.
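The ./configure invocation above consumed the config_params string assembled earlier (plus --with-shared), and the "Type 'make' to build." prompt is answered by the run_test make step that follows. A sketch of the equivalent manual build, with the flag list copied verbatim from this log and the job count taken from the make invocation below:

    #!/usr/bin/env bash
    cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-shared
    make -j96    # matches the run_test make invocation that follows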
09:51:07 -- spdk/autobuild.sh@69 -- $ run_test make make -j96
09:51:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
09:51:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable
09:51:07 -- common/autotest_common.sh@10 -- $ set +x
00:02:23.125 ************************************
00:02:23.125 START TEST make
00:02:23.125 ************************************
00:02:23.125 09:51:07 make -- common/autotest_common.sh@1125 -- $ make -j96
00:02:23.125 make[1]: Nothing to be done for 'all'.
00:02:31.246 The Meson build system
00:02:31.246 Version: 1.3.1
00:02:31.246 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk
00:02:31.246 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp
00:02:31.246 Build type: native build
00:02:31.246 Program cat found: YES (/usr/bin/cat)
00:02:31.246 Project name: DPDK
00:02:31.246 Project version: 24.03.0
00:02:31.246 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:31.246 C linker for the host machine: cc ld.bfd 2.39-16
00:02:31.246 Host machine cpu family: x86_64
00:02:31.246 Host machine cpu: x86_64
00:02:31.246 Message: ## Building in Developer Mode ##
00:02:31.246 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:31.246 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:02:31.246 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:31.246 Program python3 found: YES (/usr/bin/python3)
00:02:31.246 Program cat found: YES (/usr/bin/cat)
00:02:31.246 Compiler for C supports arguments -march=native: YES
00:02:31.246 Checking for size of "void *" : 8
00:02:31.246 Checking for size of "void *" : 8 (cached)
00:02:31.246 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:02:31.246 Library m found: YES
00:02:31.246 Library numa found: YES
00:02:31.246 Has header "numaif.h" : YES
00:02:31.246 Library fdt found: NO
00:02:31.246 Library execinfo found: NO
00:02:31.246 Has header "execinfo.h" : YES
00:02:31.246 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:31.246 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:31.246 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:31.246 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:31.246 Run-time dependency openssl found: YES 3.0.9
00:02:31.246 Run-time dependency libpcap found: YES 1.10.4
00:02:31.246 Has header "pcap.h" with dependency libpcap: YES
00:02:31.246 Compiler for C supports arguments -Wcast-qual: YES
00:02:31.246 Compiler for C supports arguments -Wdeprecated: YES
00:02:31.246 Compiler for C supports arguments -Wformat: YES
00:02:31.246 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:31.246 Compiler for C supports arguments -Wformat-security: NO
00:02:31.246 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:31.246 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:31.246 Compiler for C supports arguments -Wnested-externs: YES
00:02:31.246 Compiler for C supports arguments -Wold-style-definition: YES
00:02:31.246 Compiler for C supports arguments -Wpointer-arith: YES
00:02:31.246 Compiler for C supports arguments -Wsign-compare: YES
00:02:31.246 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:31.246 Compiler for C supports arguments -Wundef: YES
00:02:31.246 Compiler for C supports arguments -Wwrite-strings: YES
00:02:31.246 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:31.246 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:31.246 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:31.246 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:31.246 Program objdump found: YES (/usr/bin/objdump)
00:02:31.246 Compiler for C supports arguments -mavx512f: YES
00:02:31.246 Checking if "AVX512 checking" compiles: YES
00:02:31.246 Fetching value of define "__SSE4_2__" : 1
00:02:31.246 Fetching value of define "__AES__" : 1
00:02:31.246 Fetching value of define "__AVX__" : 1
00:02:31.246 Fetching value of define "__AVX2__" : 1
00:02:31.246 Fetching value of define "__AVX512BW__" : 1
00:02:31.246 Fetching value of define "__AVX512CD__" : 1
00:02:31.246 Fetching value of define "__AVX512DQ__" : 1
00:02:31.246 Fetching value of define "__AVX512F__" : 1
00:02:31.246 Fetching value of define "__AVX512VL__" : 1
00:02:31.246 Fetching value of define "__PCLMUL__" : 1
00:02:31.246 Fetching value of define "__RDRND__" : 1
00:02:31.246 Fetching value of define "__RDSEED__" : 1
00:02:31.246 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:31.246 Fetching value of define "__znver1__" : (undefined)
00:02:31.246 Fetching value of define "__znver2__" : (undefined)
00:02:31.246 Fetching value of define "__znver3__" : (undefined)
00:02:31.246 Fetching value of define "__znver4__" : (undefined)
00:02:31.246 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:31.246 Message: lib/log: Defining dependency "log"
00:02:31.246 Message: lib/kvargs: Defining dependency "kvargs"
00:02:31.246 Message: lib/telemetry: Defining dependency "telemetry"
00:02:31.246 Checking for function "getentropy" : NO
00:02:31.246 Message: lib/eal: Defining dependency "eal"
00:02:31.246 Message: lib/ring: Defining dependency "ring"
00:02:31.246 Message: lib/rcu: Defining dependency "rcu"
00:02:31.246 Message: lib/mempool: Defining dependency "mempool"
00:02:31.246 Message: lib/mbuf: Defining dependency "mbuf"
00:02:31.246 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:31.246 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:31.246 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:31.246 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:31.246 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:31.246 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:31.246 Compiler for C supports arguments -mpclmul: YES
00:02:31.246 Compiler for C supports arguments -maes: YES
00:02:31.246 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:31.246 Compiler for C supports arguments -mavx512bw: YES
00:02:31.246 Compiler for C supports arguments -mavx512dq: YES
00:02:31.246 Compiler for C supports arguments -mavx512vl: YES
00:02:31.246 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:31.246 Compiler for C supports arguments -mavx2: YES
00:02:31.246 Compiler for C supports arguments -mavx: YES
00:02:31.246 Message: lib/net: Defining dependency "net"
00:02:31.246 Message: lib/meter: Defining dependency "meter"
00:02:31.246 Message: lib/ethdev: Defining dependency "ethdev"
00:02:31.246 Message: lib/pci: Defining dependency "pci"
00:02:31.246 Message: lib/cmdline: Defining dependency "cmdline"
00:02:31.246 Message: lib/hash: Defining dependency "hash"
00:02:31.246 Message: lib/timer: Defining dependency "timer"
00:02:31.246 Message: lib/compressdev: Defining dependency "compressdev"
00:02:31.246 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:31.246 Message: lib/dmadev: Defining dependency "dmadev"
00:02:31.246 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:31.246 Message: lib/power: Defining dependency "power"
00:02:31.246 Message: lib/reorder: Defining dependency "reorder"
00:02:31.246 Message: lib/security: Defining dependency "security"
00:02:31.246 Has header "linux/userfaultfd.h" : YES
00:02:31.246 Has header "linux/vduse.h" : YES
00:02:31.246 Message: lib/vhost: Defining dependency "vhost"
00:02:31.246 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:31.246 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:31.246 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:31.246 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:31.246 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:31.246 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:31.246 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:31.246 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:31.246 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:31.246 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:31.246 Program doxygen found: YES (/usr/bin/doxygen)
00:02:31.246 Configuring doxy-api-html.conf using configuration
00:02:31.246 Configuring doxy-api-man.conf using configuration
00:02:31.246 Program mandb found: YES (/usr/bin/mandb)
00:02:31.246 Program sphinx-build found: NO
00:02:31.246 Configuring rte_build_config.h using configuration
00:02:31.246 Message:
00:02:31.246 =================
00:02:31.246 Applications Enabled
00:02:31.246 =================
00:02:31.246
00:02:31.246 apps:
00:02:31.246
00:02:31.246
00:02:31.246 Message:
00:02:31.246 =================
00:02:31.246 Libraries Enabled
00:02:31.246 =================
00:02:31.246
00:02:31.246 libs:
00:02:31.246 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:31.246 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:31.246 cryptodev, dmadev, power, reorder, security, vhost,
00:02:31.246
00:02:31.246 Message:
00:02:31.246 ===============
00:02:31.246 Drivers Enabled
00:02:31.246 ===============
00:02:31.246
00:02:31.246 common:
00:02:31.246
00:02:31.246 bus:
00:02:31.246 pci, vdev,
00:02:31.246 mempool:
00:02:31.246 ring,
00:02:31.246 dma:
00:02:31.246
00:02:31.246 net:
00:02:31.246
00:02:31.246 crypto:
00:02:31.246
00:02:31.246 compress:
00:02:31.246
00:02:31.246 vdpa:
00:02:31.246
00:02:31.246
00:02:31.246 Message:
00:02:31.246 =================
00:02:31.246 Content Skipped
00:02:31.246 =================
00:02:31.246
00:02:31.246 apps:
00:02:31.246 dumpcap: explicitly disabled via build config
00:02:31.246 graph: explicitly disabled via build config
00:02:31.246 pdump: explicitly disabled via build config
00:02:31.246 proc-info: explicitly disabled via build config
00:02:31.246 test-acl: explicitly disabled via build config
00:02:31.246 test-bbdev: explicitly disabled via build config
00:02:31.246 test-cmdline: explicitly disabled via build config
00:02:31.246 test-compress-perf: explicitly disabled via build config
00:02:31.246 test-crypto-perf: explicitly disabled via build config
00:02:31.246 test-dma-perf: explicitly disabled via build config
00:02:31.246 test-eventdev: explicitly disabled via build config
00:02:31.246 test-fib: explicitly disabled via build config
00:02:31.246 test-flow-perf: explicitly disabled via build config
00:02:31.246 test-gpudev: explicitly disabled via build config
00:02:31.246 test-mldev: explicitly disabled via build config
00:02:31.246 test-pipeline: explicitly disabled via build config
00:02:31.246 test-pmd: explicitly disabled via build config
00:02:31.246 test-regex: explicitly disabled via build config
00:02:31.246 test-sad: explicitly disabled via build config
00:02:31.246 test-security-perf: explicitly disabled via build config
00:02:31.246
00:02:31.246 libs:
00:02:31.246 argparse: explicitly disabled via build config
00:02:31.246 metrics: explicitly disabled via build config
00:02:31.246 acl: explicitly disabled via build config
00:02:31.246 bbdev: explicitly disabled via build config
00:02:31.246 bitratestats: explicitly disabled via build config
00:02:31.246 bpf: explicitly disabled via build config
00:02:31.246 cfgfile: explicitly disabled via build config
00:02:31.246 distributor: explicitly disabled via build config
00:02:31.246 efd: explicitly disabled via build config
00:02:31.246 eventdev: explicitly disabled via build config
00:02:31.246 dispatcher: explicitly disabled via build config
00:02:31.246 gpudev: explicitly disabled via build config
00:02:31.246 gro: explicitly disabled via build config
00:02:31.246 gso: explicitly disabled via build config
00:02:31.246 ip_frag: explicitly disabled via build config
00:02:31.246 jobstats: explicitly disabled via build config
00:02:31.246 latencystats: explicitly disabled via build config
00:02:31.246 lpm: explicitly disabled via build config
00:02:31.246 member: explicitly disabled via build config
00:02:31.246 pcapng: explicitly disabled via build config
00:02:31.246 rawdev: explicitly disabled via build config
00:02:31.246 regexdev: explicitly disabled via build config
00:02:31.246 mldev: explicitly disabled via build config
00:02:31.246 rib: explicitly disabled via build config
00:02:31.246 sched: explicitly disabled via build config
00:02:31.246 stack: explicitly disabled via build config
00:02:31.246 ipsec: explicitly disabled via build config
00:02:31.246 pdcp: explicitly disabled via build config
00:02:31.246 fib: explicitly disabled via build config
00:02:31.246 port: explicitly disabled via build config
00:02:31.246 pdump: explicitly disabled via build config
00:02:31.246 table: explicitly disabled via build config
00:02:31.246 pipeline: explicitly disabled via build config
00:02:31.246 graph: explicitly disabled via build config
00:02:31.246 node: explicitly disabled via build config
00:02:31.246
00:02:31.246 drivers:
00:02:31.246 common/cpt: not in enabled drivers build config
00:02:31.246 common/dpaax: not in enabled drivers build config
00:02:31.246 common/iavf: not in enabled drivers build config
00:02:31.246 common/idpf: not in enabled drivers build config
00:02:31.246 common/ionic: not in enabled drivers build config
00:02:31.246 common/mvep: not in enabled drivers build config
00:02:31.246 common/octeontx: not in enabled drivers build config
00:02:31.246 bus/auxiliary: not in enabled drivers build config
00:02:31.246 bus/cdx: not in enabled drivers build config
00:02:31.246 bus/dpaa: not in enabled drivers build config
00:02:31.246 bus/fslmc: not in enabled drivers build config
00:02:31.246 bus/ifpga: not in enabled drivers build config
00:02:31.246 bus/platform: not in enabled drivers build config
00:02:31.246 bus/uacce: not in enabled drivers build config
00:02:31.246 bus/vmbus: not in enabled drivers build config
00:02:31.246 common/cnxk: not in enabled drivers build config
00:02:31.246 common/mlx5: not in enabled drivers build config
00:02:31.246 common/nfp: not in enabled drivers build config
00:02:31.246 common/nitrox: not in enabled drivers build config
00:02:31.246 common/qat: not in enabled drivers build config
00:02:31.246 common/sfc_efx: not in enabled drivers build config
00:02:31.246 mempool/bucket: not in enabled drivers build config
00:02:31.246 mempool/cnxk: not in enabled drivers build config
00:02:31.246 mempool/dpaa: not in enabled drivers build config
00:02:31.246 mempool/dpaa2: not in enabled drivers build config
00:02:31.246 mempool/octeontx: not in enabled drivers build config
00:02:31.246 mempool/stack: not in enabled drivers build config
00:02:31.246 dma/cnxk: not in enabled drivers build config
00:02:31.246 dma/dpaa: not in enabled drivers build config
00:02:31.246 dma/dpaa2: not in enabled drivers build config
00:02:31.246 dma/hisilicon: not in enabled drivers build config
00:02:31.246 dma/idxd: not in enabled drivers build config
00:02:31.246 dma/ioat: not in enabled drivers build config
00:02:31.246 dma/skeleton: not in enabled drivers build config
00:02:31.246 net/af_packet: not in enabled drivers build config
00:02:31.246 net/af_xdp: not in enabled drivers build config
00:02:31.246 net/ark: not in enabled drivers build config
00:02:31.246 net/atlantic: not in enabled drivers build config
00:02:31.246 net/avp: not in enabled drivers build config
00:02:31.246 net/axgbe: not in enabled drivers build config
00:02:31.246 net/bnx2x: not in enabled drivers build config
00:02:31.246 net/bnxt: not in enabled drivers build config
00:02:31.246 net/bonding: not in enabled drivers build config
00:02:31.246 net/cnxk: not in enabled drivers build config
00:02:31.246 net/cpfl: not in enabled drivers build config
00:02:31.246 net/cxgbe: not in enabled drivers build config
00:02:31.246 net/dpaa: not in enabled drivers build config
00:02:31.246 net/dpaa2: not in enabled drivers build config
00:02:31.246 net/e1000: not in enabled drivers build config
00:02:31.246 net/ena: not in enabled drivers build config
00:02:31.246 net/enetc: not in enabled drivers build config
00:02:31.246 net/enetfec: not in enabled drivers build config
00:02:31.246 net/enic: not in enabled drivers build config
00:02:31.246 net/failsafe: not in enabled drivers build config
00:02:31.246 net/fm10k: not in enabled drivers build config
00:02:31.246 net/gve: not in enabled drivers build config
00:02:31.246 net/hinic: not in enabled drivers build config
00:02:31.246 net/hns3: not in enabled drivers build config
00:02:31.246 net/i40e: not in enabled drivers build config
00:02:31.246 net/iavf: not in enabled drivers build config
00:02:31.246 net/ice: not in enabled drivers build config
00:02:31.246 net/idpf: not in enabled drivers build config
00:02:31.246 net/igc: not in enabled drivers build config
00:02:31.246 net/ionic: not in enabled drivers build config
00:02:31.246 net/ipn3ke: not in enabled drivers build config
00:02:31.246 net/ixgbe: not in enabled drivers build config
00:02:31.246 net/mana: not in enabled drivers build config
00:02:31.246 net/memif: not in enabled drivers build config
00:02:31.246 net/mlx4: not in enabled drivers build config
00:02:31.247 net/mlx5: not in enabled drivers build config
00:02:31.247 net/mvneta: not in enabled drivers build config
00:02:31.247 net/mvpp2: not in enabled drivers build config
00:02:31.247 net/netvsc: not in enabled drivers build config
00:02:31.247 net/nfb: not in enabled drivers build config
00:02:31.247 net/nfp: not in enabled drivers build config
00:02:31.247 net/ngbe: not in enabled drivers build config
00:02:31.247 net/null: not in enabled drivers build config
00:02:31.247 net/octeontx: not in enabled drivers build config
00:02:31.247 net/octeon_ep: not in enabled drivers build config
00:02:31.247 net/pcap: not in enabled drivers build config
00:02:31.247 net/pfe: not in enabled drivers build config
00:02:31.247 net/qede: not in enabled drivers build config
00:02:31.247 net/ring: not in enabled drivers build config
00:02:31.247 net/sfc: not in enabled drivers build config
00:02:31.247 net/softnic: not in enabled drivers build config
00:02:31.247 net/tap: not in enabled drivers build config
00:02:31.247 net/thunderx: not in enabled drivers build config
00:02:31.247 net/txgbe: not in enabled drivers build config
00:02:31.247 net/vdev_netvsc: not in enabled drivers build config
00:02:31.247 net/vhost: not in enabled drivers build config
00:02:31.247 net/virtio: not in enabled drivers build config
00:02:31.247 net/vmxnet3: not in enabled drivers build config
00:02:31.247 raw/*: missing internal dependency, "rawdev"
00:02:31.247 crypto/armv8: not in enabled drivers build config
00:02:31.247 crypto/bcmfs: not in enabled drivers build config
00:02:31.247 crypto/caam_jr: not in enabled drivers build config
00:02:31.247 crypto/ccp: not in enabled drivers build config
00:02:31.247 crypto/cnxk: not in enabled drivers build config
00:02:31.247 crypto/dpaa_sec: not in enabled drivers build config
00:02:31.247 crypto/dpaa2_sec: not in enabled drivers build config
00:02:31.247 crypto/ipsec_mb: not in enabled drivers build config
00:02:31.247 crypto/mlx5: not in enabled drivers build config
00:02:31.247 crypto/mvsam: not in enabled drivers build config
00:02:31.247 crypto/nitrox: not in enabled drivers build config
00:02:31.247 crypto/null: not in enabled drivers build config
00:02:31.247 crypto/octeontx: not in enabled drivers build config
00:02:31.247 crypto/openssl: not in enabled drivers build config
00:02:31.247 crypto/scheduler: not in enabled drivers build config
00:02:31.247 crypto/uadk: not in enabled drivers build config
00:02:31.247 crypto/virtio: not in enabled drivers build config
00:02:31.247 compress/isal: not in enabled drivers build config
00:02:31.247 compress/mlx5: not in enabled drivers build config
00:02:31.247 compress/nitrox: not in enabled drivers build config
00:02:31.247 compress/octeontx: not in enabled drivers build config
00:02:31.247 compress/zlib: not in enabled drivers build config
00:02:31.247 regex/*: missing internal dependency, "regexdev"
00:02:31.247 ml/*: missing internal dependency, "mldev"
00:02:31.247 vdpa/ifc: not in enabled drivers build config
00:02:31.247 vdpa/mlx5: not in enabled drivers build config
00:02:31.247 vdpa/nfp: not in enabled drivers build config
00:02:31.247 vdpa/sfc: not in enabled drivers build config
00:02:31.247 event/*: missing internal dependency, "eventdev"
00:02:31.247 baseband/*: missing internal dependency, "bbdev"
00:02:31.247 gpu/*: missing internal dependency, "gpudev"
00:02:31.247
00:02:31.247
00:02:31.247 Build targets in project: 85
00:02:31.247
00:02:31.247 DPDK 24.03.0
00:02:31.247
00:02:31.247 User defined options
00:02:31.247 buildtype : debug
00:02:31.247 default_library : shared
00:02:31.247 libdir : lib
00:02:31.247 prefix : /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build
00:02:31.247 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:31.247 c_link_args :
00:02:31.247 cpu_instruction_set: native
00:02:31.247 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:31.247 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:31.247 enable_docs : false
00:02:31.247 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:31.247 enable_kmods : false
00:02:31.247 max_lcores : 128
00:02:31.247 tests : false
00:02:31.247
00:02:31.247 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:31.247 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp'
[1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[7/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[9/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[10/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[13/268] Linking static target lib/librte_kvargs.a
[14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[16/268] Compiling C object lib/librte_log.a.p/log_log.c.o
[17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[18/268] Linking static target lib/librte_log.a
[19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[21/268] Linking static target lib/librte_pci.a
[22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
[31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[33/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[36/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
[37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[38/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
[39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[40/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
[41/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[42/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
[43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[45/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[46/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[47/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[48/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
[49/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
[53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
[54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
[55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
[56/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
[57/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
[58/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[59/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
[60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
[61/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
[62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[63/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[64/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
[65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
[66/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[67/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
[68/268] Linking static target lib/net/libnet_crc_avx512_lib.a
[69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
[70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
[71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[73/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[74/268] Linking static target lib/librte_ring.a
[75/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
[76/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
[77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
[78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[79/268] Linking static target lib/librte_meter.a
[80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
[81/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
[82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
[83/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
[84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
[85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
[86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
[87/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
[88/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
[89/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
[91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
[92/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
[93/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
[94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
[95/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
[96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
[97/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
[98/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
[99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
[100/268] Linking static target lib/librte_telemetry.a
[101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
[102/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
[103/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
[104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
[105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
[106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
[107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
[108/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
[109/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
[110/268] Linking static target lib/librte_mempool.a
[111/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
[112/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
[113/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
[114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
[115/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
[116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
[117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
[118/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
[119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
[120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
[121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
[122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
[123/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
[124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
[125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
[126/268] Linking static target lib/librte_net.a
[127/268] Linking static target lib/librte_rcu.a
[128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
[129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
[130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
[131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
[132/268] Linking static target lib/librte_cmdline.a
[133/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
[134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
[135/268] Linking static target lib/librte_mbuf.a
[136/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
[137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
[138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
[139/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
[140/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
[141/268] Linking target lib/librte_log.so.24.1
[142/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
[143/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
[144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
[145/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
[146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
[147/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
[148/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
[149/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
[150/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
[151/268] Linking static target lib/librte_timer.a
[152/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
[153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
[154/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
[155/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
[156/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
[157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
[158/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
[159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
[160/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
[161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
[162/268] Linking static target lib/librte_reorder.a
[163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
[164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
[165/268] Linking target lib/librte_telemetry.so.24.1
[166/268] Linking target lib/librte_kvargs.so.24.1
[167/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
[168/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
[169/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
[170/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
[171/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
[172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
[173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
[174/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
[175/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
[176/268] Linking static target drivers/libtmp_rte_bus_vdev.a
[177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
[178/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
[179/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
[180/268] Linking static target lib/librte_dmadev.a
[181/268] Linking static target drivers/libtmp_rte_bus_pci.a
[182/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
[183/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
[184/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
[185/268] Linking static target lib/librte_compressdev.a
[186/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
[187/268] Linking static target lib/librte_power.a
[188/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:02:32.539
[189/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:32.539 [190/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:32.539 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:32.539 [192/268] Linking static target lib/librte_hash.a 00:02:32.539 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:32.539 [194/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:32.539 [195/268] Linking static target lib/librte_security.a 00:02:32.539 [196/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:32.539 [197/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:32.539 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:32.539 [199/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.539 [200/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:32.539 [201/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:32.539 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:32.539 [203/268] Linking static target drivers/librte_bus_vdev.a 00:02:32.539 [204/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:32.539 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.539 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:32.539 [207/268] Linking static target lib/librte_cryptodev.a 00:02:32.539 [208/268] Linking static target drivers/librte_bus_pci.a 00:02:32.539 [209/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.539 [210/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.539 [211/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.797 [212/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.797 [213/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:32.797 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.797 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:32.797 [216/268] Linking static target drivers/librte_mempool_ring.a 00:02:32.797 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.056 [218/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:33.056 [219/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.056 [220/268] Linking static target lib/librte_ethdev.a 00:02:33.056 [221/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.056 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:33.056 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.056 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.314 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:33.314 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.314 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.251 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:34.251 [229/268] Linking static target lib/librte_vhost.a 00:02:34.509 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.415 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.696 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.263 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.263 [234/268] Linking target lib/librte_eal.so.24.1 00:02:42.263 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:42.263 [236/268] Linking target lib/librte_timer.so.24.1 00:02:42.263 [237/268] Linking target lib/librte_ring.so.24.1 00:02:42.263 [238/268] Linking target lib/librte_meter.so.24.1 00:02:42.263 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:42.263 [240/268] Linking target lib/librte_dmadev.so.24.1 00:02:42.263 [241/268] Linking target lib/librte_pci.so.24.1 00:02:42.521 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:42.521 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:42.521 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:42.521 [245/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:42.521 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:42.521 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:42.521 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:42.521 [249/268] Linking target lib/librte_rcu.so.24.1 00:02:42.781 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:42.781 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:42.781 [252/268] Linking target lib/librte_mbuf.so.24.1 00:02:42.781 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:42.781 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:42.781 [255/268] Linking target lib/librte_net.so.24.1 00:02:42.781 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:42.781 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:42.781 [258/268] Linking target lib/librte_reorder.so.24.1 00:02:43.039 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:43.039 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:43.039 [261/268] Linking target lib/librte_hash.so.24.1 00:02:43.039 [262/268] Linking target lib/librte_cmdline.so.24.1 00:02:43.039 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:43.039 [264/268] Linking target lib/librte_security.so.24.1 00:02:43.298 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:43.298 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:43.298 [267/268] Linking target lib/librte_power.so.24.1 00:02:43.298 [268/268] Linking target 
lib/librte_vhost.so.24.1 00:02:43.298 INFO: autodetecting backend as ninja 00:02:43.298 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:44.236 CC lib/ut_mock/mock.o 00:02:44.236 CC lib/ut/ut.o 00:02:44.236 CC lib/log/log.o 00:02:44.236 CC lib/log/log_flags.o 00:02:44.236 CC lib/log/log_deprecated.o 00:02:44.495 LIB libspdk_log.a 00:02:44.495 LIB libspdk_ut_mock.a 00:02:44.495 LIB libspdk_ut.a 00:02:44.495 SO libspdk_ut_mock.so.6.0 00:02:44.495 SO libspdk_log.so.7.0 00:02:44.495 SO libspdk_ut.so.2.0 00:02:44.495 SYMLINK libspdk_ut_mock.so 00:02:44.495 SYMLINK libspdk_log.so 00:02:44.495 SYMLINK libspdk_ut.so 00:02:44.753 CC lib/ioat/ioat.o 00:02:44.753 CC lib/dma/dma.o 00:02:44.753 CXX lib/trace_parser/trace.o 00:02:44.753 CC lib/util/base64.o 00:02:44.753 CC lib/util/bit_array.o 00:02:44.753 CC lib/util/cpuset.o 00:02:44.753 CC lib/util/crc16.o 00:02:44.753 CC lib/util/crc32.o 00:02:44.753 CC lib/util/crc32c.o 00:02:44.753 CC lib/util/crc32_ieee.o 00:02:44.753 CC lib/util/crc64.o 00:02:44.753 CC lib/util/dif.o 00:02:44.753 CC lib/util/fd.o 00:02:44.753 CC lib/util/fd_group.o 00:02:44.753 CC lib/util/file.o 00:02:44.753 CC lib/util/hexlify.o 00:02:44.753 CC lib/util/iov.o 00:02:44.753 CC lib/util/math.o 00:02:44.753 CC lib/util/net.o 00:02:44.753 CC lib/util/pipe.o 00:02:44.753 CC lib/util/strerror_tls.o 00:02:44.753 CC lib/util/string.o 00:02:44.753 CC lib/util/uuid.o 00:02:44.753 CC lib/util/xor.o 00:02:44.753 CC lib/util/zipf.o 00:02:45.012 CC lib/vfio_user/host/vfio_user_pci.o 00:02:45.012 CC lib/vfio_user/host/vfio_user.o 00:02:45.012 LIB libspdk_dma.a 00:02:45.012 SO libspdk_dma.so.4.0 00:02:45.012 LIB libspdk_ioat.a 00:02:45.012 SYMLINK libspdk_dma.so 00:02:45.012 SO libspdk_ioat.so.7.0 00:02:45.271 SYMLINK libspdk_ioat.so 00:02:45.271 LIB libspdk_vfio_user.a 00:02:45.271 SO libspdk_vfio_user.so.5.0 00:02:45.271 LIB libspdk_util.a 00:02:45.271 SYMLINK libspdk_vfio_user.so 00:02:45.271 SO libspdk_util.so.10.0 00:02:45.530 SYMLINK libspdk_util.so 00:02:45.530 LIB libspdk_trace_parser.a 00:02:45.530 SO libspdk_trace_parser.so.5.0 00:02:45.530 SYMLINK libspdk_trace_parser.so 00:02:45.788 CC lib/vmd/vmd.o 00:02:45.788 CC lib/vmd/led.o 00:02:45.788 CC lib/env_dpdk/env.o 00:02:45.788 CC lib/env_dpdk/memory.o 00:02:45.788 CC lib/env_dpdk/pci.o 00:02:45.788 CC lib/env_dpdk/init.o 00:02:45.788 CC lib/env_dpdk/threads.o 00:02:45.788 CC lib/env_dpdk/pci_ioat.o 00:02:45.788 CC lib/env_dpdk/pci_virtio.o 00:02:45.788 CC lib/env_dpdk/pci_vmd.o 00:02:45.788 CC lib/rdma_provider/common.o 00:02:45.788 CC lib/env_dpdk/pci_idxd.o 00:02:45.788 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:45.788 CC lib/env_dpdk/pci_event.o 00:02:45.788 CC lib/env_dpdk/pci_dpdk.o 00:02:45.788 CC lib/env_dpdk/sigbus_handler.o 00:02:45.788 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:45.788 CC lib/json/json_parse.o 00:02:45.788 CC lib/conf/conf.o 00:02:45.788 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:45.788 CC lib/json/json_util.o 00:02:45.788 CC lib/rdma_utils/rdma_utils.o 00:02:45.788 CC lib/json/json_write.o 00:02:45.788 CC lib/idxd/idxd.o 00:02:45.788 CC lib/idxd/idxd_user.o 00:02:45.788 CC lib/idxd/idxd_kernel.o 00:02:46.046 LIB libspdk_rdma_provider.a 00:02:46.046 LIB libspdk_conf.a 00:02:46.046 SO libspdk_rdma_provider.so.6.0 00:02:46.046 SO libspdk_conf.so.6.0 00:02:46.046 LIB libspdk_rdma_utils.a 00:02:46.046 LIB libspdk_json.a 00:02:46.046 SO libspdk_rdma_utils.so.1.0 00:02:46.046 SYMLINK libspdk_rdma_provider.so 00:02:46.046 SYMLINK 
libspdk_conf.so 00:02:46.046 SO libspdk_json.so.6.0 00:02:46.046 SYMLINK libspdk_rdma_utils.so 00:02:46.046 SYMLINK libspdk_json.so 00:02:46.046 LIB libspdk_idxd.a 00:02:46.304 SO libspdk_idxd.so.12.0 00:02:46.304 LIB libspdk_vmd.a 00:02:46.304 SO libspdk_vmd.so.6.0 00:02:46.304 SYMLINK libspdk_idxd.so 00:02:46.304 SYMLINK libspdk_vmd.so 00:02:46.304 CC lib/jsonrpc/jsonrpc_server.o 00:02:46.304 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:46.304 CC lib/jsonrpc/jsonrpc_client.o 00:02:46.304 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:46.562 LIB libspdk_jsonrpc.a 00:02:46.562 SO libspdk_jsonrpc.so.6.0 00:02:46.821 LIB libspdk_env_dpdk.a 00:02:46.821 SYMLINK libspdk_jsonrpc.so 00:02:46.821 SO libspdk_env_dpdk.so.15.0 00:02:46.821 SYMLINK libspdk_env_dpdk.so 00:02:47.079 CC lib/rpc/rpc.o 00:02:47.338 LIB libspdk_rpc.a 00:02:47.338 SO libspdk_rpc.so.6.0 00:02:47.338 SYMLINK libspdk_rpc.so 00:02:47.597 CC lib/notify/notify.o 00:02:47.597 CC lib/notify/notify_rpc.o 00:02:47.597 CC lib/keyring/keyring.o 00:02:47.597 CC lib/keyring/keyring_rpc.o 00:02:47.597 CC lib/trace/trace.o 00:02:47.597 CC lib/trace/trace_flags.o 00:02:47.597 CC lib/trace/trace_rpc.o 00:02:47.856 LIB libspdk_notify.a 00:02:47.856 SO libspdk_notify.so.6.0 00:02:47.856 LIB libspdk_keyring.a 00:02:47.856 SYMLINK libspdk_notify.so 00:02:47.856 LIB libspdk_trace.a 00:02:47.856 SO libspdk_keyring.so.1.0 00:02:47.856 SO libspdk_trace.so.10.0 00:02:47.856 SYMLINK libspdk_keyring.so 00:02:47.856 SYMLINK libspdk_trace.so 00:02:48.424 CC lib/thread/thread.o 00:02:48.424 CC lib/thread/iobuf.o 00:02:48.424 CC lib/sock/sock.o 00:02:48.424 CC lib/sock/sock_rpc.o 00:02:48.682 LIB libspdk_sock.a 00:02:48.682 SO libspdk_sock.so.10.0 00:02:48.682 SYMLINK libspdk_sock.so 00:02:48.941 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:48.941 CC lib/nvme/nvme_ctrlr.o 00:02:48.941 CC lib/nvme/nvme_fabric.o 00:02:48.941 CC lib/nvme/nvme_ns_cmd.o 00:02:48.941 CC lib/nvme/nvme_ns.o 00:02:48.941 CC lib/nvme/nvme_pcie_common.o 00:02:48.941 CC lib/nvme/nvme_pcie.o 00:02:48.941 CC lib/nvme/nvme_qpair.o 00:02:48.941 CC lib/nvme/nvme.o 00:02:48.941 CC lib/nvme/nvme_quirks.o 00:02:48.941 CC lib/nvme/nvme_transport.o 00:02:48.941 CC lib/nvme/nvme_discovery.o 00:02:48.941 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:48.941 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:48.941 CC lib/nvme/nvme_tcp.o 00:02:48.941 CC lib/nvme/nvme_opal.o 00:02:48.941 CC lib/nvme/nvme_io_msg.o 00:02:48.941 CC lib/nvme/nvme_poll_group.o 00:02:48.941 CC lib/nvme/nvme_zns.o 00:02:48.941 CC lib/nvme/nvme_stubs.o 00:02:48.941 CC lib/nvme/nvme_auth.o 00:02:48.941 CC lib/nvme/nvme_cuse.o 00:02:48.941 CC lib/nvme/nvme_rdma.o 00:02:49.199 LIB libspdk_thread.a 00:02:49.199 SO libspdk_thread.so.10.1 00:02:49.458 SYMLINK libspdk_thread.so 00:02:49.716 CC lib/blob/blobstore.o 00:02:49.716 CC lib/blob/request.o 00:02:49.716 CC lib/accel/accel.o 00:02:49.716 CC lib/blob/zeroes.o 00:02:49.716 CC lib/accel/accel_rpc.o 00:02:49.716 CC lib/blob/blob_bs_dev.o 00:02:49.716 CC lib/init/json_config.o 00:02:49.716 CC lib/accel/accel_sw.o 00:02:49.716 CC lib/init/subsystem.o 00:02:49.716 CC lib/init/subsystem_rpc.o 00:02:49.716 CC lib/init/rpc.o 00:02:49.716 CC lib/virtio/virtio.o 00:02:49.716 CC lib/virtio/virtio_vhost_user.o 00:02:49.716 CC lib/virtio/virtio_vfio_user.o 00:02:49.716 CC lib/virtio/virtio_pci.o 00:02:49.975 LIB libspdk_init.a 00:02:49.975 SO libspdk_init.so.5.0 00:02:49.975 LIB libspdk_virtio.a 00:02:49.975 SYMLINK libspdk_init.so 00:02:49.975 SO libspdk_virtio.so.7.0 00:02:49.975 SYMLINK libspdk_virtio.so 00:02:50.233 CC 
lib/event/app.o 00:02:50.233 CC lib/event/reactor.o 00:02:50.233 CC lib/event/log_rpc.o 00:02:50.233 CC lib/event/app_rpc.o 00:02:50.233 CC lib/event/scheduler_static.o 00:02:50.492 LIB libspdk_accel.a 00:02:50.492 SO libspdk_accel.so.16.0 00:02:50.492 SYMLINK libspdk_accel.so 00:02:50.492 LIB libspdk_nvme.a 00:02:50.492 LIB libspdk_event.a 00:02:50.492 SO libspdk_event.so.14.0 00:02:50.750 SO libspdk_nvme.so.13.1 00:02:50.750 SYMLINK libspdk_event.so 00:02:50.750 CC lib/bdev/bdev.o 00:02:50.750 CC lib/bdev/bdev_rpc.o 00:02:50.750 CC lib/bdev/bdev_zone.o 00:02:50.750 CC lib/bdev/part.o 00:02:50.750 CC lib/bdev/scsi_nvme.o 00:02:51.009 SYMLINK libspdk_nvme.so 00:02:51.946 LIB libspdk_blob.a 00:02:51.946 SO libspdk_blob.so.11.0 00:02:51.946 SYMLINK libspdk_blob.so 00:02:52.205 CC lib/blobfs/blobfs.o 00:02:52.205 CC lib/blobfs/tree.o 00:02:52.205 CC lib/lvol/lvol.o 00:02:52.463 LIB libspdk_bdev.a 00:02:52.463 SO libspdk_bdev.so.16.0 00:02:52.721 SYMLINK libspdk_bdev.so 00:02:52.721 LIB libspdk_blobfs.a 00:02:52.721 SO libspdk_blobfs.so.10.0 00:02:52.721 LIB libspdk_lvol.a 00:02:52.721 SYMLINK libspdk_blobfs.so 00:02:52.721 SO libspdk_lvol.so.10.0 00:02:52.979 SYMLINK libspdk_lvol.so 00:02:52.979 CC lib/ublk/ublk.o 00:02:52.979 CC lib/nvmf/ctrlr.o 00:02:52.979 CC lib/ublk/ublk_rpc.o 00:02:52.979 CC lib/nvmf/ctrlr_discovery.o 00:02:52.979 CC lib/nbd/nbd.o 00:02:52.979 CC lib/nvmf/ctrlr_bdev.o 00:02:52.979 CC lib/nbd/nbd_rpc.o 00:02:52.979 CC lib/nvmf/subsystem.o 00:02:52.979 CC lib/ftl/ftl_core.o 00:02:52.979 CC lib/nvmf/nvmf.o 00:02:52.979 CC lib/ftl/ftl_init.o 00:02:52.979 CC lib/nvmf/nvmf_rpc.o 00:02:52.979 CC lib/ftl/ftl_layout.o 00:02:52.979 CC lib/nvmf/transport.o 00:02:52.979 CC lib/scsi/dev.o 00:02:52.979 CC lib/ftl/ftl_debug.o 00:02:52.979 CC lib/nvmf/tcp.o 00:02:52.979 CC lib/scsi/lun.o 00:02:52.979 CC lib/nvmf/stubs.o 00:02:52.979 CC lib/ftl/ftl_io.o 00:02:52.979 CC lib/scsi/port.o 00:02:52.979 CC lib/nvmf/mdns_server.o 00:02:52.979 CC lib/scsi/scsi.o 00:02:52.979 CC lib/nvmf/rdma.o 00:02:52.979 CC lib/ftl/ftl_sb.o 00:02:52.979 CC lib/scsi/scsi_bdev.o 00:02:52.979 CC lib/ftl/ftl_l2p.o 00:02:52.979 CC lib/scsi/scsi_pr.o 00:02:52.979 CC lib/nvmf/auth.o 00:02:52.979 CC lib/ftl/ftl_l2p_flat.o 00:02:52.980 CC lib/scsi/scsi_rpc.o 00:02:52.980 CC lib/ftl/ftl_nv_cache.o 00:02:52.980 CC lib/ftl/ftl_band.o 00:02:52.980 CC lib/scsi/task.o 00:02:52.980 CC lib/ftl/ftl_band_ops.o 00:02:52.980 CC lib/ftl/ftl_writer.o 00:02:52.980 CC lib/ftl/ftl_rq.o 00:02:52.980 CC lib/ftl/ftl_reloc.o 00:02:52.980 CC lib/ftl/ftl_l2p_cache.o 00:02:52.980 CC lib/ftl/ftl_p2l.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:52.980 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:52.980 CC lib/ftl/utils/ftl_conf.o 00:02:52.980 CC lib/ftl/utils/ftl_md.o 00:02:52.980 CC lib/ftl/utils/ftl_bitmap.o 00:02:52.980 CC lib/ftl/utils/ftl_mempool.o 00:02:52.980 CC lib/ftl/utils/ftl_property.o 00:02:52.980 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:52.980 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:52.980 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:52.980 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:52.980 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:52.980 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:52.980 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:52.980 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:52.980 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:52.980 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:52.980 CC lib/ftl/base/ftl_base_dev.o 00:02:52.980 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:52.980 CC lib/ftl/ftl_trace.o 00:02:52.980 CC lib/ftl/base/ftl_base_bdev.o 00:02:53.547 LIB libspdk_nbd.a 00:02:53.547 SO libspdk_nbd.so.7.0 00:02:53.547 SYMLINK libspdk_nbd.so 00:02:53.547 LIB libspdk_ublk.a 00:02:53.547 LIB libspdk_scsi.a 00:02:53.547 SO libspdk_ublk.so.3.0 00:02:53.547 SO libspdk_scsi.so.9.0 00:02:53.806 SYMLINK libspdk_ublk.so 00:02:53.806 SYMLINK libspdk_scsi.so 00:02:53.806 LIB libspdk_ftl.a 00:02:54.064 SO libspdk_ftl.so.9.0 00:02:54.064 CC lib/vhost/vhost.o 00:02:54.064 CC lib/vhost/vhost_rpc.o 00:02:54.064 CC lib/iscsi/conn.o 00:02:54.064 CC lib/vhost/vhost_scsi.o 00:02:54.064 CC lib/iscsi/init_grp.o 00:02:54.064 CC lib/vhost/vhost_blk.o 00:02:54.064 CC lib/iscsi/iscsi.o 00:02:54.064 CC lib/vhost/rte_vhost_user.o 00:02:54.064 CC lib/iscsi/md5.o 00:02:54.064 CC lib/iscsi/param.o 00:02:54.064 CC lib/iscsi/portal_grp.o 00:02:54.064 CC lib/iscsi/tgt_node.o 00:02:54.064 CC lib/iscsi/iscsi_subsystem.o 00:02:54.064 CC lib/iscsi/iscsi_rpc.o 00:02:54.064 CC lib/iscsi/task.o 00:02:54.323 SYMLINK libspdk_ftl.so 00:02:54.582 LIB libspdk_nvmf.a 00:02:54.582 SO libspdk_nvmf.so.19.0 00:02:54.840 SYMLINK libspdk_nvmf.so 00:02:54.840 LIB libspdk_vhost.a 00:02:54.840 SO libspdk_vhost.so.8.0 00:02:54.840 SYMLINK libspdk_vhost.so 00:02:55.099 LIB libspdk_iscsi.a 00:02:55.099 SO libspdk_iscsi.so.8.0 00:02:55.099 SYMLINK libspdk_iscsi.so 00:02:55.667 CC module/env_dpdk/env_dpdk_rpc.o 00:02:55.926 CC module/blob/bdev/blob_bdev.o 00:02:55.926 LIB libspdk_env_dpdk_rpc.a 00:02:55.926 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:55.926 CC module/keyring/file/keyring.o 00:02:55.926 CC module/sock/posix/posix.o 00:02:55.926 CC module/keyring/file/keyring_rpc.o 00:02:55.926 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:55.926 CC module/accel/dsa/accel_dsa.o 00:02:55.926 CC module/accel/dsa/accel_dsa_rpc.o 00:02:55.926 CC module/accel/error/accel_error.o 00:02:55.926 CC module/accel/error/accel_error_rpc.o 00:02:55.926 CC module/scheduler/gscheduler/gscheduler.o 00:02:55.926 CC module/accel/ioat/accel_ioat_rpc.o 00:02:55.926 CC module/accel/ioat/accel_ioat.o 00:02:55.926 CC module/accel/iaa/accel_iaa.o 00:02:55.926 CC module/accel/iaa/accel_iaa_rpc.o 00:02:55.926 CC module/keyring/linux/keyring.o 00:02:55.926 CC module/keyring/linux/keyring_rpc.o 00:02:55.926 SO libspdk_env_dpdk_rpc.so.6.0 00:02:55.926 SYMLINK libspdk_env_dpdk_rpc.so 00:02:55.926 LIB libspdk_keyring_file.a 00:02:55.926 LIB libspdk_keyring_linux.a 00:02:55.926 LIB libspdk_scheduler_dpdk_governor.a 00:02:55.926 LIB libspdk_scheduler_dynamic.a 00:02:55.926 LIB libspdk_scheduler_gscheduler.a 00:02:55.926 LIB libspdk_accel_error.a 00:02:55.926 SO libspdk_keyring_linux.so.1.0 00:02:55.926 SO libspdk_keyring_file.so.1.0 00:02:56.184 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:56.185 LIB libspdk_accel_ioat.a 00:02:56.185 SO libspdk_scheduler_gscheduler.so.4.0 00:02:56.185 SO libspdk_scheduler_dynamic.so.4.0 00:02:56.185 SO libspdk_accel_error.so.2.0 00:02:56.185 LIB libspdk_accel_iaa.a 00:02:56.185 LIB libspdk_accel_dsa.a 00:02:56.185 SO 
libspdk_accel_ioat.so.6.0 00:02:56.185 LIB libspdk_blob_bdev.a 00:02:56.185 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:56.185 SYMLINK libspdk_keyring_file.so 00:02:56.185 SYMLINK libspdk_scheduler_gscheduler.so 00:02:56.185 SYMLINK libspdk_keyring_linux.so 00:02:56.185 SO libspdk_accel_iaa.so.3.0 00:02:56.185 SO libspdk_accel_dsa.so.5.0 00:02:56.185 SYMLINK libspdk_scheduler_dynamic.so 00:02:56.185 SO libspdk_blob_bdev.so.11.0 00:02:56.185 SYMLINK libspdk_accel_error.so 00:02:56.185 SYMLINK libspdk_accel_ioat.so 00:02:56.185 SYMLINK libspdk_accel_iaa.so 00:02:56.185 SYMLINK libspdk_accel_dsa.so 00:02:56.185 SYMLINK libspdk_blob_bdev.so 00:02:56.444 LIB libspdk_sock_posix.a 00:02:56.444 SO libspdk_sock_posix.so.6.0 00:02:56.444 SYMLINK libspdk_sock_posix.so 00:02:56.711 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:56.711 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:56.711 CC module/bdev/lvol/vbdev_lvol.o 00:02:56.711 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:56.711 CC module/bdev/raid/bdev_raid.o 00:02:56.711 CC module/bdev/passthru/vbdev_passthru.o 00:02:56.711 CC module/bdev/raid/bdev_raid_rpc.o 00:02:56.711 CC module/bdev/raid/bdev_raid_sb.o 00:02:56.711 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:56.711 CC module/bdev/raid/raid0.o 00:02:56.711 CC module/bdev/delay/vbdev_delay.o 00:02:56.711 CC module/bdev/raid/raid1.o 00:02:56.711 CC module/bdev/raid/concat.o 00:02:56.711 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:56.711 CC module/blobfs/bdev/blobfs_bdev.o 00:02:56.711 CC module/bdev/gpt/gpt.o 00:02:56.711 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:56.711 CC module/bdev/gpt/vbdev_gpt.o 00:02:56.711 CC module/bdev/iscsi/bdev_iscsi.o 00:02:56.711 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:56.711 CC module/bdev/ftl/bdev_ftl.o 00:02:56.711 CC module/bdev/split/vbdev_split.o 00:02:56.711 CC module/bdev/malloc/bdev_malloc.o 00:02:56.711 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:56.711 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:56.711 CC module/bdev/split/vbdev_split_rpc.o 00:02:56.711 CC module/bdev/error/vbdev_error.o 00:02:56.711 CC module/bdev/nvme/bdev_nvme.o 00:02:56.711 CC module/bdev/error/vbdev_error_rpc.o 00:02:56.711 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:56.711 CC module/bdev/null/bdev_null.o 00:02:56.711 CC module/bdev/nvme/nvme_rpc.o 00:02:56.711 CC module/bdev/aio/bdev_aio.o 00:02:56.711 CC module/bdev/null/bdev_null_rpc.o 00:02:56.711 CC module/bdev/nvme/bdev_mdns_client.o 00:02:56.711 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:56.711 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:56.711 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:56.711 CC module/bdev/nvme/vbdev_opal.o 00:02:56.711 CC module/bdev/aio/bdev_aio_rpc.o 00:02:56.711 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:56.711 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:56.984 LIB libspdk_blobfs_bdev.a 00:02:56.984 SO libspdk_blobfs_bdev.so.6.0 00:02:56.984 LIB libspdk_bdev_split.a 00:02:56.984 LIB libspdk_bdev_null.a 00:02:56.984 SO libspdk_bdev_split.so.6.0 00:02:56.984 LIB libspdk_bdev_error.a 00:02:56.984 LIB libspdk_bdev_gpt.a 00:02:56.984 SO libspdk_bdev_null.so.6.0 00:02:56.984 LIB libspdk_bdev_passthru.a 00:02:56.984 SYMLINK libspdk_blobfs_bdev.so 00:02:56.984 LIB libspdk_bdev_ftl.a 00:02:56.984 LIB libspdk_bdev_zone_block.a 00:02:56.984 SO libspdk_bdev_error.so.6.0 00:02:56.984 SO libspdk_bdev_passthru.so.6.0 00:02:56.984 SO libspdk_bdev_gpt.so.6.0 00:02:56.984 LIB libspdk_bdev_delay.a 00:02:56.984 SYMLINK libspdk_bdev_split.so 00:02:56.984 LIB libspdk_bdev_aio.a 
00:02:56.984 SO libspdk_bdev_ftl.so.6.0 00:02:56.984 SO libspdk_bdev_zone_block.so.6.0 00:02:56.984 LIB libspdk_bdev_iscsi.a 00:02:56.984 SYMLINK libspdk_bdev_null.so 00:02:56.984 SO libspdk_bdev_aio.so.6.0 00:02:56.984 SO libspdk_bdev_delay.so.6.0 00:02:56.984 SYMLINK libspdk_bdev_gpt.so 00:02:56.984 SO libspdk_bdev_iscsi.so.6.0 00:02:56.984 SYMLINK libspdk_bdev_error.so 00:02:56.984 SYMLINK libspdk_bdev_passthru.so 00:02:56.984 LIB libspdk_bdev_malloc.a 00:02:56.984 SYMLINK libspdk_bdev_ftl.so 00:02:56.984 SYMLINK libspdk_bdev_zone_block.so 00:02:56.985 SO libspdk_bdev_malloc.so.6.0 00:02:57.243 SYMLINK libspdk_bdev_aio.so 00:02:57.243 LIB libspdk_bdev_lvol.a 00:02:57.243 SYMLINK libspdk_bdev_delay.so 00:02:57.243 SYMLINK libspdk_bdev_iscsi.so 00:02:57.243 LIB libspdk_bdev_virtio.a 00:02:57.243 SO libspdk_bdev_lvol.so.6.0 00:02:57.243 SYMLINK libspdk_bdev_malloc.so 00:02:57.243 SO libspdk_bdev_virtio.so.6.0 00:02:57.243 SYMLINK libspdk_bdev_lvol.so 00:02:57.243 SYMLINK libspdk_bdev_virtio.so 00:02:57.502 LIB libspdk_bdev_raid.a 00:02:57.502 SO libspdk_bdev_raid.so.6.0 00:02:57.502 SYMLINK libspdk_bdev_raid.so 00:02:58.438 LIB libspdk_bdev_nvme.a 00:02:58.438 SO libspdk_bdev_nvme.so.7.0 00:02:58.438 SYMLINK libspdk_bdev_nvme.so 00:02:59.005 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:59.005 CC module/event/subsystems/vmd/vmd.o 00:02:59.005 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:59.005 CC module/event/subsystems/iobuf/iobuf.o 00:02:59.005 CC module/event/subsystems/keyring/keyring.o 00:02:59.005 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:59.005 CC module/event/subsystems/sock/sock.o 00:02:59.005 CC module/event/subsystems/scheduler/scheduler.o 00:02:59.264 LIB libspdk_event_vhost_blk.a 00:02:59.264 LIB libspdk_event_keyring.a 00:02:59.264 LIB libspdk_event_scheduler.a 00:02:59.264 LIB libspdk_event_vmd.a 00:02:59.264 LIB libspdk_event_iobuf.a 00:02:59.264 LIB libspdk_event_sock.a 00:02:59.264 SO libspdk_event_scheduler.so.4.0 00:02:59.264 SO libspdk_event_vhost_blk.so.3.0 00:02:59.264 SO libspdk_event_vmd.so.6.0 00:02:59.264 SO libspdk_event_keyring.so.1.0 00:02:59.264 SO libspdk_event_iobuf.so.3.0 00:02:59.264 SO libspdk_event_sock.so.5.0 00:02:59.264 SYMLINK libspdk_event_scheduler.so 00:02:59.264 SYMLINK libspdk_event_vhost_blk.so 00:02:59.264 SYMLINK libspdk_event_keyring.so 00:02:59.264 SYMLINK libspdk_event_vmd.so 00:02:59.264 SYMLINK libspdk_event_iobuf.so 00:02:59.264 SYMLINK libspdk_event_sock.so 00:02:59.524 CC module/event/subsystems/accel/accel.o 00:02:59.783 LIB libspdk_event_accel.a 00:02:59.783 SO libspdk_event_accel.so.6.0 00:02:59.783 SYMLINK libspdk_event_accel.so 00:03:00.042 CC module/event/subsystems/bdev/bdev.o 00:03:00.301 LIB libspdk_event_bdev.a 00:03:00.301 SO libspdk_event_bdev.so.6.0 00:03:00.301 SYMLINK libspdk_event_bdev.so 00:03:00.560 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:00.560 CC module/event/subsystems/ublk/ublk.o 00:03:00.560 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:00.560 CC module/event/subsystems/scsi/scsi.o 00:03:00.560 CC module/event/subsystems/nbd/nbd.o 00:03:00.819 LIB libspdk_event_ublk.a 00:03:00.819 LIB libspdk_event_nbd.a 00:03:00.819 LIB libspdk_event_scsi.a 00:03:00.819 SO libspdk_event_ublk.so.3.0 00:03:00.819 SO libspdk_event_nbd.so.6.0 00:03:00.819 LIB libspdk_event_nvmf.a 00:03:00.819 SO libspdk_event_scsi.so.6.0 00:03:00.819 SYMLINK libspdk_event_ublk.so 00:03:00.819 SO libspdk_event_nvmf.so.6.0 00:03:00.819 SYMLINK libspdk_event_nbd.so 00:03:00.819 SYMLINK libspdk_event_scsi.so 
00:03:00.819 SYMLINK libspdk_event_nvmf.so 00:03:01.084 CC module/event/subsystems/iscsi/iscsi.o 00:03:01.084 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:01.343 LIB libspdk_event_vhost_scsi.a 00:03:01.343 LIB libspdk_event_iscsi.a 00:03:01.343 SO libspdk_event_vhost_scsi.so.3.0 00:03:01.343 SO libspdk_event_iscsi.so.6.0 00:03:01.343 SYMLINK libspdk_event_vhost_scsi.so 00:03:01.343 SYMLINK libspdk_event_iscsi.so 00:03:01.603 SO libspdk.so.6.0 00:03:01.603 SYMLINK libspdk.so 00:03:01.861 CC app/spdk_lspci/spdk_lspci.o 00:03:01.861 TEST_HEADER include/spdk/accel.h 00:03:01.861 TEST_HEADER include/spdk/accel_module.h 00:03:01.861 TEST_HEADER include/spdk/assert.h 00:03:01.861 TEST_HEADER include/spdk/barrier.h 00:03:01.861 CC app/trace_record/trace_record.o 00:03:01.861 CC app/spdk_nvme_discover/discovery_aer.o 00:03:01.861 TEST_HEADER include/spdk/base64.h 00:03:01.861 TEST_HEADER include/spdk/bdev.h 00:03:01.861 TEST_HEADER include/spdk/bdev_module.h 00:03:01.861 TEST_HEADER include/spdk/bdev_zone.h 00:03:01.861 TEST_HEADER include/spdk/bit_array.h 00:03:01.861 CXX app/trace/trace.o 00:03:01.861 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:01.861 TEST_HEADER include/spdk/bit_pool.h 00:03:01.861 TEST_HEADER include/spdk/blob_bdev.h 00:03:01.861 TEST_HEADER include/spdk/blobfs.h 00:03:01.861 TEST_HEADER include/spdk/blob.h 00:03:01.861 CC app/spdk_nvme_perf/perf.o 00:03:01.861 CC app/spdk_nvme_identify/identify.o 00:03:01.861 TEST_HEADER include/spdk/conf.h 00:03:01.861 CC app/spdk_top/spdk_top.o 00:03:01.861 TEST_HEADER include/spdk/config.h 00:03:01.861 CC test/rpc_client/rpc_client_test.o 00:03:01.861 TEST_HEADER include/spdk/cpuset.h 00:03:01.861 TEST_HEADER include/spdk/crc16.h 00:03:01.861 TEST_HEADER include/spdk/crc32.h 00:03:01.861 TEST_HEADER include/spdk/dif.h 00:03:01.861 TEST_HEADER include/spdk/crc64.h 00:03:01.861 TEST_HEADER include/spdk/dma.h 00:03:01.861 TEST_HEADER include/spdk/endian.h 00:03:01.861 TEST_HEADER include/spdk/env.h 00:03:01.861 TEST_HEADER include/spdk/env_dpdk.h 00:03:01.861 TEST_HEADER include/spdk/event.h 00:03:01.861 TEST_HEADER include/spdk/fd_group.h 00:03:01.861 TEST_HEADER include/spdk/fd.h 00:03:01.861 TEST_HEADER include/spdk/file.h 00:03:01.861 TEST_HEADER include/spdk/ftl.h 00:03:01.861 TEST_HEADER include/spdk/gpt_spec.h 00:03:01.861 TEST_HEADER include/spdk/hexlify.h 00:03:01.861 TEST_HEADER include/spdk/histogram_data.h 00:03:01.861 TEST_HEADER include/spdk/idxd.h 00:03:01.861 TEST_HEADER include/spdk/idxd_spec.h 00:03:01.861 TEST_HEADER include/spdk/init.h 00:03:01.861 TEST_HEADER include/spdk/ioat.h 00:03:01.861 TEST_HEADER include/spdk/iscsi_spec.h 00:03:01.861 TEST_HEADER include/spdk/json.h 00:03:01.861 TEST_HEADER include/spdk/ioat_spec.h 00:03:01.861 TEST_HEADER include/spdk/jsonrpc.h 00:03:01.861 TEST_HEADER include/spdk/keyring.h 00:03:01.861 TEST_HEADER include/spdk/keyring_module.h 00:03:02.127 TEST_HEADER include/spdk/likely.h 00:03:02.127 TEST_HEADER include/spdk/log.h 00:03:02.127 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:02.127 TEST_HEADER include/spdk/memory.h 00:03:02.127 TEST_HEADER include/spdk/lvol.h 00:03:02.127 TEST_HEADER include/spdk/mmio.h 00:03:02.127 TEST_HEADER include/spdk/net.h 00:03:02.127 TEST_HEADER include/spdk/nbd.h 00:03:02.127 TEST_HEADER include/spdk/notify.h 00:03:02.127 TEST_HEADER include/spdk/nvme_intel.h 00:03:02.127 TEST_HEADER include/spdk/nvme.h 00:03:02.127 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:02.127 TEST_HEADER include/spdk/nvme_spec.h 00:03:02.127 TEST_HEADER 
include/spdk/nvme_zns.h 00:03:02.127 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:02.127 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:02.127 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:02.127 TEST_HEADER include/spdk/nvmf.h 00:03:02.127 TEST_HEADER include/spdk/nvmf_spec.h 00:03:02.127 TEST_HEADER include/spdk/nvmf_transport.h 00:03:02.127 TEST_HEADER include/spdk/opal.h 00:03:02.127 TEST_HEADER include/spdk/opal_spec.h 00:03:02.127 TEST_HEADER include/spdk/pci_ids.h 00:03:02.127 TEST_HEADER include/spdk/queue.h 00:03:02.127 TEST_HEADER include/spdk/pipe.h 00:03:02.127 CC app/iscsi_tgt/iscsi_tgt.o 00:03:02.127 TEST_HEADER include/spdk/reduce.h 00:03:02.127 TEST_HEADER include/spdk/rpc.h 00:03:02.127 TEST_HEADER include/spdk/scheduler.h 00:03:02.127 TEST_HEADER include/spdk/scsi.h 00:03:02.127 TEST_HEADER include/spdk/scsi_spec.h 00:03:02.127 CC app/spdk_dd/spdk_dd.o 00:03:02.127 CC app/nvmf_tgt/nvmf_main.o 00:03:02.127 TEST_HEADER include/spdk/sock.h 00:03:02.127 TEST_HEADER include/spdk/stdinc.h 00:03:02.127 TEST_HEADER include/spdk/string.h 00:03:02.127 TEST_HEADER include/spdk/thread.h 00:03:02.127 TEST_HEADER include/spdk/trace.h 00:03:02.127 TEST_HEADER include/spdk/tree.h 00:03:02.127 TEST_HEADER include/spdk/trace_parser.h 00:03:02.127 TEST_HEADER include/spdk/ublk.h 00:03:02.127 TEST_HEADER include/spdk/uuid.h 00:03:02.127 TEST_HEADER include/spdk/util.h 00:03:02.127 TEST_HEADER include/spdk/version.h 00:03:02.127 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:02.127 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:02.127 TEST_HEADER include/spdk/vhost.h 00:03:02.127 TEST_HEADER include/spdk/zipf.h 00:03:02.127 TEST_HEADER include/spdk/vmd.h 00:03:02.127 TEST_HEADER include/spdk/xor.h 00:03:02.127 CXX test/cpp_headers/accel.o 00:03:02.127 CXX test/cpp_headers/accel_module.o 00:03:02.127 CXX test/cpp_headers/barrier.o 00:03:02.127 CXX test/cpp_headers/assert.o 00:03:02.127 CXX test/cpp_headers/bdev.o 00:03:02.127 CXX test/cpp_headers/base64.o 00:03:02.127 CXX test/cpp_headers/bdev_module.o 00:03:02.127 CC app/spdk_tgt/spdk_tgt.o 00:03:02.127 CXX test/cpp_headers/bdev_zone.o 00:03:02.127 CXX test/cpp_headers/blob_bdev.o 00:03:02.127 CXX test/cpp_headers/bit_array.o 00:03:02.127 CXX test/cpp_headers/bit_pool.o 00:03:02.127 CXX test/cpp_headers/blobfs_bdev.o 00:03:02.127 CXX test/cpp_headers/blobfs.o 00:03:02.127 CXX test/cpp_headers/blob.o 00:03:02.127 CXX test/cpp_headers/conf.o 00:03:02.127 CXX test/cpp_headers/config.o 00:03:02.127 CXX test/cpp_headers/crc16.o 00:03:02.127 CXX test/cpp_headers/cpuset.o 00:03:02.127 CXX test/cpp_headers/crc32.o 00:03:02.127 CXX test/cpp_headers/crc64.o 00:03:02.127 CXX test/cpp_headers/dif.o 00:03:02.127 CXX test/cpp_headers/dma.o 00:03:02.127 CXX test/cpp_headers/endian.o 00:03:02.127 CXX test/cpp_headers/env_dpdk.o 00:03:02.127 CXX test/cpp_headers/fd_group.o 00:03:02.127 CXX test/cpp_headers/env.o 00:03:02.127 CXX test/cpp_headers/fd.o 00:03:02.127 CXX test/cpp_headers/event.o 00:03:02.127 CXX test/cpp_headers/file.o 00:03:02.127 CXX test/cpp_headers/gpt_spec.o 00:03:02.127 CXX test/cpp_headers/ftl.o 00:03:02.127 CXX test/cpp_headers/hexlify.o 00:03:02.127 CXX test/cpp_headers/histogram_data.o 00:03:02.127 CXX test/cpp_headers/idxd.o 00:03:02.127 CXX test/cpp_headers/idxd_spec.o 00:03:02.127 CXX test/cpp_headers/init.o 00:03:02.127 CXX test/cpp_headers/iscsi_spec.o 00:03:02.127 CXX test/cpp_headers/ioat.o 00:03:02.127 CXX test/cpp_headers/json.o 00:03:02.127 CXX test/cpp_headers/ioat_spec.o 00:03:02.127 CXX test/cpp_headers/jsonrpc.o 
00:03:02.127 CXX test/cpp_headers/keyring.o 00:03:02.127 CXX test/cpp_headers/keyring_module.o 00:03:02.127 CXX test/cpp_headers/likely.o 00:03:02.127 CXX test/cpp_headers/log.o 00:03:02.127 CXX test/cpp_headers/memory.o 00:03:02.127 CXX test/cpp_headers/lvol.o 00:03:02.127 CXX test/cpp_headers/mmio.o 00:03:02.127 CXX test/cpp_headers/nbd.o 00:03:02.127 CXX test/cpp_headers/net.o 00:03:02.127 CXX test/cpp_headers/nvme.o 00:03:02.127 CXX test/cpp_headers/notify.o 00:03:02.127 CXX test/cpp_headers/nvme_ocssd.o 00:03:02.127 CXX test/cpp_headers/nvme_intel.o 00:03:02.128 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:02.128 CXX test/cpp_headers/nvme_spec.o 00:03:02.128 CXX test/cpp_headers/nvmf_cmd.o 00:03:02.128 CXX test/cpp_headers/nvme_zns.o 00:03:02.128 CXX test/cpp_headers/nvmf.o 00:03:02.128 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:02.128 CXX test/cpp_headers/nvmf_transport.o 00:03:02.128 CXX test/cpp_headers/nvmf_spec.o 00:03:02.128 CXX test/cpp_headers/opal_spec.o 00:03:02.128 CXX test/cpp_headers/opal.o 00:03:02.128 CXX test/cpp_headers/pci_ids.o 00:03:02.128 CXX test/cpp_headers/pipe.o 00:03:02.128 CXX test/cpp_headers/queue.o 00:03:02.128 CC test/thread/poller_perf/poller_perf.o 00:03:02.128 CC examples/util/zipf/zipf.o 00:03:02.128 CC test/app/jsoncat/jsoncat.o 00:03:02.128 CC examples/ioat/perf/perf.o 00:03:02.128 CC test/env/memory/memory_ut.o 00:03:02.128 CC app/fio/nvme/fio_plugin.o 00:03:02.128 CC test/app/stub/stub.o 00:03:02.128 CC examples/ioat/verify/verify.o 00:03:02.128 CC test/app/histogram_perf/histogram_perf.o 00:03:02.128 CC test/env/vtophys/vtophys.o 00:03:02.128 CC test/dma/test_dma/test_dma.o 00:03:02.128 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:02.128 CC test/env/pci/pci_ut.o 00:03:02.128 CC test/app/bdev_svc/bdev_svc.o 00:03:02.128 CC app/fio/bdev/fio_plugin.o 00:03:02.391 CXX test/cpp_headers/reduce.o 00:03:02.391 LINK spdk_lspci 00:03:02.391 LINK rpc_client_test 00:03:02.652 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:02.652 LINK spdk_trace_record 00:03:02.652 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:02.652 LINK spdk_nvme_discover 00:03:02.652 LINK jsoncat 00:03:02.652 CC test/env/mem_callbacks/mem_callbacks.o 00:03:02.652 LINK interrupt_tgt 00:03:02.652 LINK iscsi_tgt 00:03:02.652 LINK zipf 00:03:02.652 CXX test/cpp_headers/rpc.o 00:03:02.652 CXX test/cpp_headers/scheduler.o 00:03:02.652 CXX test/cpp_headers/scsi.o 00:03:02.652 LINK nvmf_tgt 00:03:02.652 CXX test/cpp_headers/scsi_spec.o 00:03:02.652 CXX test/cpp_headers/sock.o 00:03:02.652 CXX test/cpp_headers/stdinc.o 00:03:02.652 CXX test/cpp_headers/string.o 00:03:02.652 LINK spdk_tgt 00:03:02.652 CXX test/cpp_headers/thread.o 00:03:02.652 CXX test/cpp_headers/trace_parser.o 00:03:02.652 CXX test/cpp_headers/trace.o 00:03:02.652 CXX test/cpp_headers/tree.o 00:03:02.652 CXX test/cpp_headers/ublk.o 00:03:02.652 CXX test/cpp_headers/util.o 00:03:02.652 CXX test/cpp_headers/uuid.o 00:03:02.652 CXX test/cpp_headers/version.o 00:03:02.652 CXX test/cpp_headers/vfio_user_pci.o 00:03:02.652 CXX test/cpp_headers/vfio_user_spec.o 00:03:02.652 CXX test/cpp_headers/vhost.o 00:03:02.652 CXX test/cpp_headers/vmd.o 00:03:02.652 CXX test/cpp_headers/xor.o 00:03:02.652 CXX test/cpp_headers/zipf.o 00:03:02.652 LINK ioat_perf 00:03:02.652 LINK verify 00:03:02.652 LINK histogram_perf 00:03:02.652 LINK vtophys 00:03:02.910 LINK poller_perf 00:03:02.910 LINK env_dpdk_post_init 00:03:02.910 LINK stub 00:03:02.910 LINK spdk_dd 00:03:02.910 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:02.910 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:02.910 LINK bdev_svc 00:03:02.910 LINK spdk_trace 00:03:02.910 LINK test_dma 00:03:02.910 LINK pci_ut 00:03:03.168 LINK nvme_fuzz 00:03:03.168 LINK spdk_nvme 00:03:03.168 LINK spdk_bdev 00:03:03.168 CC examples/vmd/lsvmd/lsvmd.o 00:03:03.168 CC examples/idxd/perf/perf.o 00:03:03.168 LINK spdk_nvme_perf 00:03:03.168 CC examples/vmd/led/led.o 00:03:03.168 CC examples/sock/hello_world/hello_sock.o 00:03:03.168 CC examples/thread/thread/thread_ex.o 00:03:03.168 LINK spdk_nvme_identify 00:03:03.168 CC test/event/event_perf/event_perf.o 00:03:03.168 CC test/event/reactor_perf/reactor_perf.o 00:03:03.168 LINK spdk_top 00:03:03.168 CC test/event/reactor/reactor.o 00:03:03.168 CC app/vhost/vhost.o 00:03:03.168 LINK vhost_fuzz 00:03:03.168 CC test/event/app_repeat/app_repeat.o 00:03:03.168 CC test/event/scheduler/scheduler.o 00:03:03.426 LINK lsvmd 00:03:03.426 LINK led 00:03:03.426 LINK mem_callbacks 00:03:03.426 LINK reactor_perf 00:03:03.426 LINK event_perf 00:03:03.426 LINK hello_sock 00:03:03.426 CC test/nvme/overhead/overhead.o 00:03:03.427 CC test/nvme/simple_copy/simple_copy.o 00:03:03.427 LINK reactor 00:03:03.427 CC test/nvme/startup/startup.o 00:03:03.427 CC test/nvme/sgl/sgl.o 00:03:03.427 LINK app_repeat 00:03:03.427 CC test/nvme/e2edp/nvme_dp.o 00:03:03.427 CC test/nvme/cuse/cuse.o 00:03:03.427 CC test/nvme/fdp/fdp.o 00:03:03.427 CC test/nvme/aer/aer.o 00:03:03.427 CC test/nvme/reset/reset.o 00:03:03.427 CC test/nvme/err_injection/err_injection.o 00:03:03.427 CC test/nvme/reserve/reserve.o 00:03:03.427 CC test/nvme/fused_ordering/fused_ordering.o 00:03:03.427 CC test/nvme/compliance/nvme_compliance.o 00:03:03.427 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:03.427 CC test/nvme/connect_stress/connect_stress.o 00:03:03.427 LINK idxd_perf 00:03:03.427 CC test/nvme/boot_partition/boot_partition.o 00:03:03.427 CC test/accel/dif/dif.o 00:03:03.427 CC test/blobfs/mkfs/mkfs.o 00:03:03.427 LINK thread 00:03:03.427 LINK vhost 00:03:03.427 CC test/lvol/esnap/esnap.o 00:03:03.427 LINK scheduler 00:03:03.685 LINK startup 00:03:03.685 LINK connect_stress 00:03:03.685 LINK boot_partition 00:03:03.685 LINK err_injection 00:03:03.685 LINK fused_ordering 00:03:03.685 LINK memory_ut 00:03:03.685 LINK doorbell_aers 00:03:03.685 LINK simple_copy 00:03:03.685 LINK reserve 00:03:03.685 LINK mkfs 00:03:03.685 LINK sgl 00:03:03.685 LINK overhead 00:03:03.685 LINK aer 00:03:03.685 LINK nvme_dp 00:03:03.685 LINK reset 00:03:03.685 LINK nvme_compliance 00:03:03.685 LINK fdp 00:03:03.943 CC examples/nvme/abort/abort.o 00:03:03.943 CC examples/nvme/reconnect/reconnect.o 00:03:03.943 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:03.943 CC examples/nvme/hotplug/hotplug.o 00:03:03.943 CC examples/nvme/hello_world/hello_world.o 00:03:03.943 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:03.944 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:03.944 CC examples/nvme/arbitration/arbitration.o 00:03:03.944 LINK dif 00:03:03.944 CC examples/accel/perf/accel_perf.o 00:03:03.944 CC examples/blob/hello_world/hello_blob.o 00:03:03.944 CC examples/blob/cli/blobcli.o 00:03:03.944 LINK cmb_copy 00:03:03.944 LINK pmr_persistence 00:03:03.944 LINK hello_world 00:03:03.944 LINK hotplug 00:03:04.202 LINK iscsi_fuzz 00:03:04.202 LINK reconnect 00:03:04.202 LINK arbitration 00:03:04.202 LINK abort 00:03:04.202 LINK hello_blob 00:03:04.202 LINK nvme_manage 00:03:04.202 LINK accel_perf 00:03:04.460 CC test/bdev/bdevio/bdevio.o 00:03:04.460 LINK blobcli 00:03:04.460 LINK cuse 
00:03:04.718 LINK bdevio 00:03:04.718 CC examples/bdev/bdevperf/bdevperf.o 00:03:04.718 CC examples/bdev/hello_world/hello_bdev.o 00:03:04.976 LINK hello_bdev 00:03:05.234 LINK bdevperf 00:03:05.802 CC examples/nvmf/nvmf/nvmf.o 00:03:06.060 LINK nvmf 00:03:06.997 LINK esnap 00:03:07.257 00:03:07.257 real 0m45.030s 00:03:07.257 user 6m37.333s 00:03:07.257 sys 3m22.704s 00:03:07.257 09:51:52 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:07.257 09:51:52 make -- common/autotest_common.sh@10 -- $ set +x 00:03:07.257 ************************************ 00:03:07.257 END TEST make 00:03:07.257 ************************************ 00:03:07.257 09:51:52 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:07.257 09:51:52 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:07.257 09:51:52 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:07.257 09:51:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.257 09:51:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:07.257 09:51:52 -- pm/common@44 -- $ pid=2275539 00:03:07.257 09:51:52 -- pm/common@50 -- $ kill -TERM 2275539 00:03:07.257 09:51:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.257 09:51:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:07.257 09:51:52 -- pm/common@44 -- $ pid=2275541 00:03:07.257 09:51:52 -- pm/common@50 -- $ kill -TERM 2275541 00:03:07.257 09:51:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.257 09:51:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:07.257 09:51:52 -- pm/common@44 -- $ pid=2275543 00:03:07.257 09:51:52 -- pm/common@50 -- $ kill -TERM 2275543 00:03:07.257 09:51:52 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.257 09:51:52 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:07.257 09:51:52 -- pm/common@44 -- $ pid=2275566 00:03:07.257 09:51:52 -- pm/common@50 -- $ sudo -E kill -TERM 2275566 00:03:07.257 09:51:52 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:07.257 09:51:52 -- nvmf/common.sh@7 -- # uname -s 00:03:07.517 09:51:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:07.517 09:51:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:07.517 09:51:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:07.517 09:51:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:07.517 09:51:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:07.517 09:51:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:07.517 09:51:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:07.517 09:51:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:07.517 09:51:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:07.517 09:51:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:07.517 09:51:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:07.517 09:51:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:07.517 09:51:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:07.517 09:51:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:07.517 09:51:52 -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:07.517 09:51:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:07.517 09:51:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:07.517 09:51:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:07.517 09:51:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:07.517 09:51:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:07.517 09:51:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.517 09:51:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.517 09:51:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.517 09:51:52 -- paths/export.sh@5 -- # export PATH 00:03:07.517 09:51:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:07.517 09:51:52 -- nvmf/common.sh@47 -- # : 0 00:03:07.517 09:51:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:07.517 09:51:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:07.517 09:51:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:07.517 09:51:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:07.517 09:51:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:07.517 09:51:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:07.517 09:51:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:07.517 09:51:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:07.517 09:51:52 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:07.517 09:51:52 -- spdk/autotest.sh@32 -- # uname -s 00:03:07.517 09:51:52 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:07.517 09:51:52 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:07.517 09:51:52 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:07.517 09:51:52 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:07.517 09:51:52 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:07.517 09:51:52 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:07.517 09:51:52 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:07.517 09:51:52 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:07.517 09:51:52 -- spdk/autotest.sh@48 -- # udevadm_pid=2334561 00:03:07.517 09:51:52 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:07.517 09:51:52 -- spdk/autotest.sh@47 -- 
# /usr/sbin/udevadm monitor --property 00:03:07.517 09:51:52 -- pm/common@17 -- # local monitor 00:03:07.517 09:51:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.517 09:51:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.517 09:51:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.517 09:51:52 -- pm/common@21 -- # date +%s 00:03:07.517 09:51:52 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:07.517 09:51:52 -- pm/common@21 -- # date +%s 00:03:07.517 09:51:52 -- pm/common@25 -- # sleep 1 00:03:07.517 09:51:52 -- pm/common@21 -- # date +%s 00:03:07.517 09:51:52 -- pm/common@21 -- # date +%s 00:03:07.517 09:51:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893912 00:03:07.517 09:51:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893912 00:03:07.517 09:51:52 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893912 00:03:07.517 09:51:52 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893912 00:03:07.517 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893912_collect-vmstat.pm.log 00:03:07.517 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893912_collect-cpu-load.pm.log 00:03:07.517 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893912_collect-cpu-temp.pm.log 00:03:07.517 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893912_collect-bmc-pm.bmc.pm.log 00:03:08.454 09:51:53 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:08.454 09:51:53 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:08.454 09:51:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:08.454 09:51:53 -- common/autotest_common.sh@10 -- # set +x 00:03:08.454 09:51:53 -- spdk/autotest.sh@59 -- # create_test_list 00:03:08.454 09:51:53 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:08.454 09:51:53 -- common/autotest_common.sh@10 -- # set +x 00:03:08.454 09:51:53 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:08.454 09:51:53 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:08.454 09:51:53 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:08.454 09:51:53 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:08.454 09:51:53 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:08.454 09:51:53 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:08.454 09:51:53 -- common/autotest_common.sh@1455 -- # uname 00:03:08.454 09:51:53 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:08.454 09:51:53 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:08.454 
09:51:53 -- common/autotest_common.sh@1475 -- # uname 00:03:08.454 09:51:53 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:08.454 09:51:53 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:08.454 09:51:53 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:08.454 09:51:53 -- spdk/autotest.sh@72 -- # hash lcov 00:03:08.454 09:51:53 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:08.454 09:51:53 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:08.454 --rc lcov_branch_coverage=1 00:03:08.454 --rc lcov_function_coverage=1 00:03:08.454 --rc genhtml_branch_coverage=1 00:03:08.454 --rc genhtml_function_coverage=1 00:03:08.454 --rc genhtml_legend=1 00:03:08.454 --rc geninfo_all_blocks=1 00:03:08.454 ' 00:03:08.454 [the same six-option block is echoed three more times, for the LCOV_OPTS assignment at autotest.sh@80 and for the export and assignment of LCOV at autotest.sh@81, which wrap the options as LCOV='lcov <options> --no-external'] 00:03:08.454 09:51:53 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:08.454 lcov: LCOV version 1.14 00:03:08.454 09:51:53 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:20.650 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:20.650 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:28.788 [between 00:03:28.788 and 00:03:29.309 geninfo prints the same warning pair ("<name>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <name>.gcno") for each header-wrapper object under /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/cpp_headers: accel, barrier, bdev, base64, assert, accel_module, blob_bdev, blobfs_bdev, bdev_module, bit_pool, bit_array, bdev_zone, config, blobfs, blob, conf, crc64, cpuset, dma, dif, crc32, fd_group, crc16, gpt_spec, env_dpdk, endian, hexlify, idxd, fd, idxd_spec, env, file, event, init, histogram_data, ftl, ioat, iscsi_spec, json, jsonrpc, keyring, likely, keyring_module, memory, ioat_spec, log, lvol, mmio, nvme, nvme_ocssd_spec, nvme_ocssd, net, nbd, nvmf, nvme_spec, notify, nvme_intel, nvme_zns, nvmf_fc_spec, nvmf_cmd, nvmf_transport, nvmf_spec, opal_spec, opal, pci_ids, pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, trace_parser, thread, trace, tree, ublk, util, uuid, version, vfio_user_pci, vfio_user_spec, vhost, vmd, xor and zipf]
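Note: the -c -i -t Baseline capture above records a zero-count entry for every instrumented .gcno object before any test runs, which is why files that are never executed still appear in the final report; the per-header "no functions found" warnings are expected for header-only compilation stubs and typically harmless. For reference, the usual lcov flow such a baseline feeds into looks roughly like this (output paths illustrative):

    LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    lcov $LCOV_OPTS --no-external -q -c -i -t Baseline -d ./spdk -o cov_base.info   # zero-count baseline
    # ... the test suite runs here, filling in the .gcda counters ...
    lcov $LCOV_OPTS --no-external -q -c -t Tests -d ./spdk -o cov_test.info         # post-run capture
    lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info             # merge keeps never-run files
    genhtml cov_total.info -o coverage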
00:03:31.840 09:52:16 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:31.840 09:52:16 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:31.840 09:52:16 -- common/autotest_common.sh@10 -- # set +x 00:03:31.840 09:52:16 -- spdk/autotest.sh@91 -- # rm -f 00:03:31.840 09:52:16 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:35.129 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:35.129 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:35.129 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:35.129 09:52:20 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:35.129 09:52:20 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:35.129 09:52:20 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:35.129 09:52:20 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:35.129 09:52:20 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:35.129 09:52:20 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:35.129 09:52:20 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:35.129 09:52:20 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:35.129 09:52:20 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:35.129 09:52:20 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:35.129 09:52:20 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:35.129 09:52:20 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:35.129 09:52:20 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:35.129 09:52:20 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:35.129 09:52:20 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:35.129 No valid GPT data, bailing 00:03:35.129 09:52:20 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:35.129 09:52:20 -- scripts/common.sh@391 -- # pt= 00:03:35.129 09:52:20 -- scripts/common.sh@392 -- # return 1 00:03:35.129 09:52:20 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:35.129 1+0 records in 00:03:35.129 1+0 records out 00:03:35.129 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00632043 s, 166 MB/s 00:03:35.129 09:52:20 -- spdk/autotest.sh@118 -- # sync 00:03:35.129 09:52:20 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:35.129
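Note: block_in_use above is deciding whether /dev/nvme0n1 may be claimed for the test run: the spdk-gpt.py probe finds no GPT, blkid reports no partition-table type, so the function returns 1 (not in use) and autotest scrubs the first MiB of the disk. A condensed sketch of that probe-and-scrub logic (device name illustrative; the dd is destructive on a real disk):

    dev=nvme0n1
    # zoned block devices report something other than "none" here and are excluded (get_zoned_devs)
    if [[ -e /sys/block/$dev/queue/zoned && $(</sys/block/$dev/queue/zoned) != none ]]; then
        echo "skipping zoned device $dev"
    elif [[ -z $(blkid -s PTTYPE -o value /dev/$dev) ]]; then
        # no partition table -> treat the disk as free and wipe any stale metadata in the label area
        dd if=/dev/zero of=/dev/$dev bs=1M count=1 && sync
    fi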
09:52:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:35.129 09:52:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:40.403 09:52:25 -- spdk/autotest.sh@124 -- # uname -s 00:03:40.403 09:52:25 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:40.403 09:52:25 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:40.403 09:52:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.403 09:52:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.403 09:52:25 -- common/autotest_common.sh@10 -- # set +x 00:03:40.403 ************************************ 00:03:40.403 START TEST setup.sh 00:03:40.403 ************************************ 00:03:40.403 09:52:25 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:40.403 * Looking for test storage... 00:03:40.403 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:40.403 09:52:25 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:40.403 09:52:25 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:40.403 09:52:25 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:40.403 09:52:25 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:40.403 09:52:25 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:40.403 09:52:25 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.403 ************************************ 00:03:40.403 START TEST acl 00:03:40.403 ************************************ 00:03:40.403 09:52:25 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:40.662 * Looking for test storage... 
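Note: each run_test invocation above wraps a sub-script with an argument-count guard (the '[' 2 -le 1 ']' trace), a START TEST/END TEST banner, and timing; the real/user/sys totals printed when a test finishes come from that wrapper. A simplified sketch of the pattern, not the actual autotest_common.sh implementation:

    run_test() {
        local name=$1; shift
        [ "$#" -le 0 ] && return 1              # mirrors the '[' 2 -le 1 ']' guard above
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                               # produces the real/user/sys summary
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test acl /path/to/spdk/test/setup/acl.sh   # illustrative call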
00:03:40.662 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:40.662 09:52:25 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:40.662 09:52:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:40.662 09:52:25 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:40.662 09:52:25 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:40.662 09:52:25 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:40.662 09:52:25 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:40.662 09:52:25 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:40.662 09:52:25 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:40.662 09:52:25 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:40.662 09:52:25 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:40.662 09:52:25 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:40.662 09:52:25 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:40.662 09:52:25 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:40.662 09:52:25 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:40.662 09:52:25 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.662 09:52:25 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:43.952 09:52:28 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:43.952 09:52:28 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:43.952 09:52:28 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:43.952 09:52:28 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:43.952 09:52:28 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:43.952 09:52:28 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:03:46.488 Hugepages 00:03:46.488 node hugesize free / total 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.488 00:03:46.488 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 
0000:00:04.1 == *:*:*.* ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:46.488 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.489 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 
00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:46.747 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:46.748 09:52:31 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:46.748 09:52:31 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:46.748 09:52:31 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:46.748 09:52:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:46.748 ************************************ 00:03:46.748 START TEST denied 00:03:46.748 ************************************ 00:03:46.748 09:52:31 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:46.748 09:52:31 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:46.748 09:52:31 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:46.748 09:52:31 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:46.748 09:52:31 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:46.748 09:52:31 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:50.040 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:50.040 09:52:34 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:50.040 09:52:34 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:50.040 09:52:34 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:50.040 09:52:34 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:50.040 09:52:34 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:50.040 09:52:34 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:50.040 09:52:34 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:50.040 09:52:34 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:50.040 09:52:34 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:50.040 09:52:34 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:54.230 00:03:54.230 real 0m7.094s 00:03:54.230 user 0m2.350s 00:03:54.230 sys 0m4.021s 00:03:54.230 09:52:38 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:54.230 09:52:38 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:54.230 ************************************ 00:03:54.230 END TEST denied 00:03:54.230 ************************************ 00:03:54.230 09:52:38 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:54.230 09:52:38 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:54.230 09:52:38 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:54.230 09:52:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:54.230 ************************************ 00:03:54.230 START TEST allowed 00:03:54.230 ************************************ 00:03:54.230 09:52:38 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:54.230 09:52:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:54.230 09:52:38 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:54.230 09:52:38 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:54.230 09:52:38 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.230 09:52:38 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:03:58.468 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:58.468 09:52:43 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:58.468 09:52:43 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:58.468 09:52:43 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:58.468 09:52:43 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:58.468 09:52:43 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:01.761 00:04:01.761 real 0m7.676s 00:04:01.761 user 0m2.232s 00:04:01.761 sys 0m3.999s 00:04:01.761 09:52:46 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.761 09:52:46 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:01.761 ************************************ 00:04:01.761 END TEST allowed 00:04:01.761 ************************************ 00:04:01.761 00:04:01.761 real 0m21.076s 00:04:01.761 user 0m6.980s 00:04:01.761 sys 0m12.142s 00:04:01.761 09:52:46 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.761 09:52:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:01.761 ************************************ 00:04:01.761 END TEST acl 00:04:01.761 ************************************ 00:04:01.761 09:52:46 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:01.761 09:52:46 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.761 09:52:46 setup.sh -- common/autotest_common.sh@1107 
-- # xtrace_disable 00:04:01.761 09:52:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:01.761 ************************************ 00:04:01.761 START TEST hugepages 00:04:01.761 ************************************ 00:04:01.761 09:52:46 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:01.761 * Looking for test storage... 00:04:01.761 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 173108160 kB' 'MemAvailable: 175979812 kB' 'Buffers: 4132 kB' 'Cached: 10105780 kB' 'SwapCached: 0 kB' 'Active: 7155852 kB' 'Inactive: 3509668 kB' 'Active(anon): 6765072 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558932 kB' 'Mapped: 205520 kB' 'Shmem: 6209464 kB' 'KReclaimable: 231312 kB' 'Slab: 822092 kB' 'SReclaimable: 231312 kB' 'SUnreclaim: 590780 kB' 'KernelStack: 20640 kB' 'PageTables: 8876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982044 kB' 'Committed_AS: 8260412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315996 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages 
-- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # 
continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.761 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:01.762 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:01.762 09:52:46 
setup.sh.hugepages -- setup/common.sh@31-32 -- # [/proc/meminfo scan for Hugepagesize continues: IFS=': ' read -r var val _ then continue, repeated for each non-matching key; Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd, HugePages_Surp all skipped]
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]]
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/common.sh@33 -- # return 0
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
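[Editor's note] Two techniques are traced above: get_meminfo walks /proc/meminfo with IFS=': ' read -r var val _, skipping keys until the requested one matches, and clear_hp zeroes every per-node hugepage pool before the test allocates its own. A minimal sketch of both, assuming the standard /proc and sysfs layouts (this is not SPDK's setup/common.sh itself, just the same idea):

#!/usr/bin/env bash
# (1) The meminfo scan, minimally: split each line on ': ' and print
#     the value of the first key that matches the request.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}
get_meminfo_sketch Hugepagesize   # prints 2048 on this runner

# (2) clear_hp, sketched: write 0 to every per-node hugepage pool
#     (standard sysfs layout; needs root).
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done
done

The Hugepagesize lookup is where default_hugepages=2048 above comes from, and the echo 0 pairs in the trace are the clear loop running over two nodes with two hugepage sizes each.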
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:01.763 09:52:46 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:01.763 09:52:46 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:01.763 09:52:46 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:01.763 09:52:46 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:01.763 ************************************
00:04:01.763 START TEST default_setup
00:04:01.763 ************************************
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:01.763 09:52:46 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
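[Editor's note] The nr_hugepages=1024 above appears to follow directly from the arguments to get_test_nr_hugepages: the requested pool size (2097152, interpreted in kB) divided by the 2048 kB default hugepage size, with the whole pool then pinned to the user-requested node 0. A sketch of that arithmetic, with variable names mirroring the trace:

#!/usr/bin/env bash
size=2097152             # requested pool size in kB (from the trace)
default_hugepages=2048   # hugepage size in kB (Hugepagesize above)
nr_hugepages=$((size / default_hugepages))
echo "$nr_hugepages"     # 1024
declare -a nodes_test
nodes_test[0]=$nr_hugepages   # all 1024 pages requested on node 0

scripts/setup.sh, invoked next, is what actually writes this request into the kernel's hugepage pools before the verification pass below.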
00:04:05.052 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci
00:04:05.052 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci
00:04:05.052 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci
00:04:05.052 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci
00:04:05.052 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci
00:04:05.052 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci
00:04:05.052 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci
00:04:05.052 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci
00:04:05.989 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci
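[Editor's note] The "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines above are scripts/setup.sh detaching each device from its kernel driver so SPDK's userspace drivers can claim it through vfio-pci. The kernel's standard sysfs rebind sequence looks like this sketch (the exact steps setup.sh performs may differ; the BDF is taken from the log, and this needs root):

#!/usr/bin/env bash
bdf=0000:5e:00.0                                           # NVMe drive from the log
echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"    # detach from nvme
echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe                   # reprobe; vfio-pci claims it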
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175267972 kB' 'MemAvailable: 178139544 kB' 'Buffers: 4132 kB' 'Cached: 10105896 kB' 'SwapCached: 0 kB' 'Active: 7170484 kB' 'Inactive: 3509668 kB' 'Active(anon): 6779704 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573564 kB' 'Mapped: 205316 kB' 'Shmem: 6209580 kB' 'KReclaimable: 231152 kB' 'Slab: 820292 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589140 kB' 'KernelStack: 20800 kB' 'PageTables: 9616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8279192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316108 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
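[Editor's note] The one-line memory dump above is xtrace echoing back the whole of /proc/meminfo after mapfile -t mem slurped it into an array; the "Node +([0-9]) " strip seen in the trace only matters when a per-node file such as /sys/devices/system/node/node0/meminfo is read instead, since those lines carry a "Node 0 " prefix. A sketch of the capture:

#!/usr/bin/env bash
shopt -s extglob                   # needed for the +([0-9]) pattern below
mem_f=/proc/meminfo                # a node=N argument would select the sysfs per-node file
mapfile -t mem < "$mem_f"          # one "Key: value" string per array element
mem=("${mem[@]#Node +([0-9]) }")   # strip any "Node N " prefixes (no-op for /proc/meminfo)
printf '%s\n' "${mem[@]:0:3}"      # show the first three entries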
00:04:06.253 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [scan for AnonHugePages: keys MemTotal through HardwareCorrupted read and skipped]
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175265936 kB' 'MemAvailable: 178137508 kB' 'Buffers: 4132 kB' 'Cached: 10105896 kB' 'SwapCached: 0 kB' 'Active: 7169924 kB' 'Inactive: 3509668 kB' 'Active(anon): 6779144 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572912 kB' 'Mapped: 205272 kB' 'Shmem: 6209580 kB' 'KReclaimable: 231152 kB' 'Slab: 820364 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589212 kB' 'KernelStack: 20752 kB' 'PageTables: 8844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8279208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316028 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
00:04:06.255 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # [scan for HugePages_Surp: keys MemTotal through HugePages_Rsvd read and skipped]
00:04:06.256 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.256 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
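[Editor's note] At this point the verifier has anon=0 (no transparent hugepages in use) and surp=0 (no surplus pages); the read just starting, whose snapshot and key match follow below, fetches HugePages_Rsvd to complete the set. One plausible way to fold these values into a pool check, reusing get_meminfo_sketch from the earlier note (the test's real pass/fail logic lives in setup/hugepages.sh; the values in the comments are the ones visible in the snapshots):

#!/usr/bin/env bash
anon=$(get_meminfo_sketch AnonHugePages)     # 0 kB of THP in use
surp=$(get_meminfo_sketch HugePages_Surp)    # 0 surplus pages
resv=$(get_meminfo_sketch HugePages_Rsvd)    # 0 reserved pages
total=$(get_meminfo_sketch HugePages_Total)  # 1024, as configured
free=$(get_meminfo_sketch HugePages_Free)    # 1024, none consumed yet
echo "usable pool: $((total - surp - resv)) pages, free: $free"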
-r var val _ 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175265344 kB' 'MemAvailable: 178136916 kB' 'Buffers: 4132 kB' 'Cached: 10105916 kB' 'SwapCached: 0 kB' 'Active: 7170496 kB' 'Inactive: 3509668 kB' 'Active(anon): 6779716 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573500 kB' 'Mapped: 205272 kB' 'Shmem: 6209600 kB' 'KReclaimable: 231152 kB' 'Slab: 820372 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589220 kB' 'KernelStack: 20672 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8277740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316060 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB' 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
00:04:06.257 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [xtrace condensed: each meminfo field compared against HugePages_Rsvd, non-matching fields skipped via continue]
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:06.259 nr_hugepages=1024
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.259 resv_hugepages=0
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.259 surplus_hugepages=0
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.259 anon_hugepages=0
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175263212 kB' 'MemAvailable: 178134784 kB' 'Buffers: 4132 kB' 'Cached: 10105936 kB' 'SwapCached: 0 kB' 'Active: 7170064 kB' 'Inactive: 3509668 kB' 'Active(anon): 6779284 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573092 kB' 'Mapped: 205272 kB' 'Shmem: 6209620 kB' 'KReclaimable: 231152 kB' 'Slab: 820372 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589220 kB' 'KernelStack: 20672 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8279252 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316028 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
00:04:06.259 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [xtrace condensed: each meminfo field compared against HugePages_Total, non-matching fields skipped via continue]
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
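The three values just collected (surp=0, resv=0, HugePages_Total=1024) feed the consistency check traced next: default_setup only trusts the pool when the kernel's global total equals the requested page count plus surplus and reserved pages. A hedged sketch of that invariant, reusing the variable names from the trace and the get_meminfo sketch above (not the script's literal code):

    nr_hugepages=1024                      # requested pool size
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    # hugepages.sh@107/@110 in the trace: fail fast on any accounting mismatch
    (( total == nr_hugepages + surp + resv )) \
        || { echo "hugepage accounting mismatch" >&2; exit 1; }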
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 91613532 kB' 'MemUsed: 6049152 kB' 'SwapCached: 0 kB' 'Active: 2333012 kB' 'Inactive: 147500 kB' 'Active(anon): 2062236 kB' 'Inactive(anon): 0 kB' 'Active(file): 270776 kB' 'Inactive(file): 147500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2045744 kB' 'Mapped: 111504 kB' 'AnonPages: 438020 kB' 'Shmem: 1627468 kB' 'KernelStack: 11896 kB' 'PageTables: 5656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 386012 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 277336 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:06.261 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [xtrace condensed: each node0 meminfo field compared against HugePages_Surp, non-matching fields skipped via continue]
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:06.522 node0=1024 expecting 1024
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:06.522 
00:04:06.522 real 0m4.557s
00:04:06.522 user 0m1.312s
00:04:06.522 sys 0m1.971s
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:06.522 09:52:51 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:06.522 ************************************
00:04:06.522 END TEST default_setup
00:04:06.522 ************************************
00:04:06.522 09:52:51 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:06.522 09:52:51 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:06.522 09:52:51 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:06.522 09:52:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:06.522 ************************************
00:04:06.522 START TEST per_node_1G_alloc
00:04:06.522 ************************************
00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc
00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
size >= default_hugepages )) 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:06.522 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:06.523 09:52:51 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:09.059 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:09.059 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:09.059 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:09.321 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:09.321 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:09.321 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:09.321 09:52:54 
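The arithmetic above: get_test_nr_hugepages is asked for 1048576 kB (1 GiB) on each of nodes 0 and 1; with the default 2048 kB hugepage size that is 512 pages per node, so nodes_test[0]=nodes_test[1]=512 and setup.sh runs with NRHUGE=512 HUGENODE=0,1, for 1024 pages in total. Outside the harness, the same per-node reservation can be made through the standard kernel sysfs interface (a sketch, not the SPDK script itself):

    size_kb=1048576                        # 1 GiB requested per node
    page_kb=2048                           # Hugepagesize from meminfo
    pages=$(( size_kb / page_kb ))         # 512
    for node in 0 1; do
        echo "$pages" | sudo tee \
            "/sys/devices/system/node/node$node/hugepages/hugepages-${page_kb}kB/nr_hugepages"
    done
    # expected result: HugePages_Total: 1024, as the snapshot below reports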
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.321 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.322 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175249844 kB' 'MemAvailable: 178121416 kB' 'Buffers: 4132 kB' 'Cached: 10106036 kB' 'SwapCached: 0 kB' 'Active: 7171016 kB' 'Inactive: 3509668 kB' 'Active(anon): 6780236 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573316 kB' 'Mapped: 205344 kB' 'Shmem: 6209720 kB' 'KReclaimable: 231152 kB' 'Slab: 820800 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589648 kB' 'KernelStack: 20624 kB' 'PageTables: 9072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8277248 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316012 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB' 00:04:09.322 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- 
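The snapshot just printed is internally consistent: HugePages_Total and HugePages_Free are both 1024, HugePages_Rsvd and HugePages_Surp are 0, and Hugetlb equals the pool size in kB. A quick check of that last identity:

    # Hugetlb should be HugePages_Total * Hugepagesize (values from the log)
    echo $(( 1024 * 2048 ))    # 2097152 kB, matching the 'Hugetlb: 2097152 kB' field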
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.322 09:52:54 [... the same continue cycle repeats for each field, MemFree through WritebackTmp, none matching ...] 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@32 -- # continue 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.323 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
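The anon accounting here hinges on the earlier transparent-hugepage check: verify_nr_hugepages only counts AnonHugePages when THP is not pinned to [never], and on this box the mode string is 'always [madvise] never', so the lookup runs and returns 0. The equivalent standalone check against the standard sysfs knob (a sketch):

    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" in this run
    if [[ $thp != *"[never]"* ]]; then
        grep '^AnonHugePages' /proc/meminfo                  # AnonHugePages: 0 kB here
    fi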
00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175250120 kB' 'MemAvailable: 178121692 kB' 'Buffers: 4132 kB' 'Cached: 10106040 kB' 'SwapCached: 0 kB' 'Active: 7171220 kB' 'Inactive: 3509668 kB' 'Active(anon): 6780440 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573572 kB' 'Mapped: 205304 kB' 'Shmem: 6209724 kB' 'KReclaimable: 231152 kB' 'Slab: 820800 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589648 kB' 'KernelStack: 20624 kB' 'PageTables: 9064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8277264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315980 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB' 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.324 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.324 09:52:54 
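A design note on these lookups: the helper re-opens and re-scans /proc/meminfo once per field, which is why the log repeats the same read loop for AnonHugePages, HugePages_Surp, and HugePages_Rsvd in turn. A one-pass alternative (a sketch, not how the SPDK script does it) caches the whole file in an associative array:

    declare -A mem
    while IFS=': ' read -r key val _; do mem[$key]=$val; done < /proc/meminfo
    echo "surp=${mem[HugePages_Surp]:-0} rsvd=${mem[HugePages_Rsvd]:-0}"   # surp=0 rsvd=0 here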
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.324 09:52:54 [... the HugePages_Surp lookup repeats the continue cycle for each field, Buffers through FileHugePages, none matching ...] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc
-- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175250872 kB' 'MemAvailable: 178122444 kB' 'Buffers: 4132 kB' 'Cached: 10106060 kB' 'SwapCached: 0 kB' 'Active: 7170588 kB' 'Inactive: 3509668 kB' 'Active(anon): 6779808 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573348 kB' 'Mapped: 205224 kB' 'Shmem: 6209744 kB' 'KReclaimable: 231152 kB' 'Slab: 820760 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589608 kB' 'KernelStack: 20608 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8277288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315980 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.326 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:04:09.326 09:52:54 [... the HugePages_Rsvd lookup walks Buffers through SUnreclaim with the same continue cycle, none matching ...] 00:04:09.327 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 09:52:54
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.327 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.327 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.327 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.327 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.327 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:09.328 nr_hugepages=1024 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:09.328 resv_hugepages=0 00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:09.328 surplus_hugepages=0 00:04:09.328 09:52:54 
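[Editor's note] The loop traced above is a plain key/value scan of a meminfo-style file: split each "Key: value [kB]" line on ': ' and return the value of the first key that matches the request. A minimal standalone sketch of the same pattern, assuming the hypothetical name get_meminfo_sketch (this is not the literal setup/common.sh source whose line numbers appear in the trace):

#!/usr/bin/env bash
# Sketch of the traced parse loop: scan /proc/meminfo for one key.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        # Non-matching keys fall through, exactly as the repeated
        # "[[ <key> == ... ]] / continue" trace lines above show.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done </proc/meminfo
    return 1 # key not present in this file
}
# e.g. get_meminfo_sketch HugePages_Rsvd -> prints 0 on this box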
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:09.328 anon_hugepages=0
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.328 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:09.329 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.329 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175251132 kB' 'MemAvailable: 178122704 kB' 'Buffers: 4132 kB' 'Cached: 10106080 kB' 'SwapCached: 0 kB' 'Active: 7170600 kB' 'Inactive: 3509668 kB' 'Active(anon): 6779820 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573348 kB' 'Mapped: 205224 kB' 'Shmem: 6209764 kB' 'KReclaimable: 231152 kB' 'Slab: 820760 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589608 kB' 'KernelStack: 20608 kB' 'PageTables: 8992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8277312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315980 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
00:04:09.329 [... setup/common.sh@31-32 read/compare/continue xtrace repeats for each key from MemTotal through Unaccepted (the full key list is visible in the printf dump above) until the requested key matches ...]
00:04:09.592 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:09.592 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:09.592 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:09.592 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:09.592 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
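[Editor's note] The get_nodes trace above enumerates NUMA nodes under /sys/devices/system/node with the extglob pattern node+([0-9]) and records an expected hugepage count per node. A rough sketch of that step, illustrative only; the 512 simply mirrors this run's 1024 pages split evenly across its 2 nodes:

#!/usr/bin/env bash
# Sketch of the traced node enumeration (requires extglob for +([0-9])).
shopt -s extglob
declare -a nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    [[ -d $node ]] || continue        # guard for non-NUMA sysfs layouts
    nodes_sys[${node##*node}]=512     # strip up to "node": index is the key
done
echo "no_nodes=${#nodes_sys[@]}"      # prints no_nodes=2 on this rig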
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.593 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 92646292 kB' 'MemUsed: 5016392 kB' 'SwapCached: 0 kB' 'Active: 2332020 kB' 'Inactive: 147500 kB' 'Active(anon): 2061244 kB' 'Inactive(anon): 0 kB' 'Active(file): 270776 kB' 'Inactive(file): 147500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2045752 kB' 'Mapped: 111460 kB' 'AnonPages: 436856 kB' 'Shmem: 1627476 kB' 'KernelStack: 11672 kB' 'PageTables: 5328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 386376 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 277700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:09.593 [... setup/common.sh@31-32 read/compare/continue xtrace repeats for each node0 key from MemTotal through HugePages_Free (full key list in the printf dump above) until the requested key matches ...]
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718500 kB' 'MemFree: 82606192 kB' 'MemUsed: 11112308 kB' 'SwapCached: 0 kB' 'Active: 4838588 kB' 'Inactive: 3362168 kB' 'Active(anon): 4718584 kB' 'Inactive(anon): 0 kB' 'Active(file): 120004 kB' 'Inactive(file): 3362168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8064508 kB' 'Mapped: 93764 kB' 'AnonPages: 136456 kB' 'Shmem: 4582336 kB' 'KernelStack: 8920 kB' 'PageTables: 3620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122476 kB' 'Slab: 434344 kB' 'SReclaimable: 122476 kB' 'SUnreclaim: 311868 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:09.594 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:09.594 [... setup/common.sh@31-32 read/compare/continue xtrace repeats for the node1 keys: MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), ...]
IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 
09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:09.595 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.596 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.596 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.596 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:09.596 node0=512 expecting 512 00:04:09.596 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:09.596 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:09.596 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:09.596 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:09.596 node1=512 expecting 512 00:04:09.596 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:09.596 00:04:09.596 real 0m3.059s 00:04:09.596 user 0m1.268s 00:04:09.596 sys 0m1.860s 00:04:09.596 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:09.596 09:52:54 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:09.596 ************************************ 00:04:09.596 END TEST per_node_1G_alloc 00:04:09.596 ************************************ 00:04:09.596 09:52:54 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:09.596 09:52:54 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:09.596 09:52:54 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:09.596 09:52:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:09.596 ************************************ 00:04:09.596 
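The pass condition just logged (node0=512, node1=512) is plain sizing arithmetic: 2097152 kB requested at the default 2048 kB hugepage size gives 1024 pages, split evenly across this rig's two NUMA nodes. A minimal sketch of that arithmetic, with illustrative variable names rather than the ones used in setup/hugepages.sh:

#!/usr/bin/env bash
# Sketch: derive the per-node hugepage counts the per-node/even tests expect.
# Assumes 2048 kB default hugepages and 2 NUMA nodes, as on this test rig.
size_kb=2097152                            # 2 GiB requested
hugepage_kb=2048                           # default hugepage size
no_nodes=2
nr_hugepages=$(( size_kb / hugepage_kb ))  # 1024 pages in total
per_node=$(( nr_hugepages / no_nodes ))    # 512 pages per node
echo "total=${nr_hugepages} per_node=${per_node}"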
00:04:09.596 ************************************
00:04:09.596 START TEST even_2G_alloc
00:04:09.596 ************************************
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:09.596 09:52:54 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:12.892 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:12.892 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:12.892 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.892 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.893 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.893 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175248616 kB' 'MemAvailable: 178120188 kB' 'Buffers: 4132 kB' 'Cached: 10106188 kB' 'SwapCached: 0 kB' 'Active: 7169580 kB' 'Inactive: 3509668 kB' 'Active(anon): 6778800 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571692 kB' 'Mapped: 204448 kB' 'Shmem: 6209872 kB' 'KReclaimable: 231152 kB' 'Slab: 820868 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589716 kB' 'KernelStack: 20592 kB' 'PageTables: 8892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8266364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316012 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
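The snapshot above is emitted by the get_meminfo helper, whose xtrace shows the pattern it follows: default to /proc/meminfo, switch to the per-node sysfs file when a node is given, mapfile the lines, strip the "Node N " prefix that sysfs adds, then scan field by field with IFS=': ' until the requested key matches. A standalone sketch of the same lookup pattern (illustrative, not SPDK's setup/common.sh itself):

#!/usr/bin/env bash
# Sketch of the meminfo-lookup pattern visible in the xtrace; not SPDK's code.
shopt -s extglob
get_meminfo() {
    local get=$1 node=$2 var val _ line
    local mem_f=/proc/meminfo mem
    # Prefer the per-node view when a node was given and its sysfs file exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix on sysfs lines
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo HugePages_Total 1   # e.g. prints 512 on node1 of this rig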
[xtrace elided: one "[[ <field> == AnonHugePages ]] / continue" record per meminfo field until AnonHugePages matches]
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
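The anon=0 result closes out the hugepages.sh@96 guard above: the current transparent-hugepage policy string ("always [madvise] never") is tested against the literal "[never]", and AnonHugePages is only fetched when THP is not disabled outright. A small sketch of that guard, assuming the standard sysfs path:

#!/usr/bin/env bash
# Sketch: count AnonHugePages only when transparent hugepages are not disabled,
# mirroring the [[ ... != *\[\n\e\v\e\r\]* ]] check in the xtrace above.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    IFS=': ' read -r _ anon _ < <(grep AnonHugePages /proc/meminfo)
fi
echo "anon=${anon} kB"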
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.894 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175249324 kB' 'MemAvailable: 178120896 kB' 'Buffers: 4132 kB' 'Cached: 10106188 kB' 'SwapCached: 0 kB' 'Active: 7168336 kB' 'Inactive: 3509668 kB' 'Active(anon): 6777556 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571492 kB' 'Mapped: 204332 kB' 'Shmem: 6209872 kB' 'KReclaimable: 231152 kB' 'Slab: 820824 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589672 kB' 'KernelStack: 20576 kB' 'PageTables: 8860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8266012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315980 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
[xtrace elided: "[[ <field> == HugePages_Surp ]] / continue" records for the remaining meminfo fields; the captured log breaks off mid-scan]
09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc 
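For orientation, here is a minimal sketch of the get_meminfo helper whose xtrace fills this log. It is reconstructed from the trace alone (the setup/common.sh@NN markers above), not copied from the actual setup/common.sh, so treat the exact names and line details as assumptions:

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below
    # Sketch reconstructed from the xtrace above; not the verbatim SPDK helper.
    get_meminfo() {
        local get=$1 node=${2:-} var val mem_f mem
        mem_f=/proc/meminfo
        # Per-node counters live in sysfs; fall back to the global file.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix each line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Every non-matching field shows up as a "continue" in the trace.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

Called as get_meminfo HugePages_Rsvd (global) or get_meminfo HugePages_Surp 0 (node 0), which is exactly the pattern traced below.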
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.896 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175249420 kB' 'MemAvailable: 178120992 kB' 'Buffers: 4132 kB' 'Cached: 10106208 kB' 'SwapCached: 0 kB' 'Active: 7168688 kB' 'Inactive: 3509668 kB' 'Active(anon): 6777908 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571244 kB' 'Mapped: 204332 kB' 'Shmem: 6209892 kB' 'KReclaimable: 231152 kB' 'Slab: 820816 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589664 kB' 'KernelStack: 20512 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8266168 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315932 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:12.898 nr_hugepages=1024
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:12.898 resv_hugepages=0
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:12.898 surplus_hugepages=0
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:12.898 anon_hugepages=0
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
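The setup/hugepages.sh@107-@110 checks traced here are the test's accounting invariant. A sketch of the idea, reusing the hedged get_meminfo sketch given earlier (the real hugepages.sh may phrase it differently):

    # Hypothetical standalone check; the values match this run of the log.
    nr_hugepages=1024                       # pages the test requested
    surp=$(get_meminfo HugePages_Surp)      # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)      # 0 in this run
    total=$(get_meminfo HugePages_Total)    # 1024 in this run
    # The pool is healthy only if the kernel total accounts for the
    # requested pages plus any surplus and reserved pages.
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

With surp=0 and resv=0, the 1024 reported in HugePages_Total is exactly the requested allocation.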
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.898 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175249676 kB' 'MemAvailable: 178121248 kB' 'Buffers: 4132 kB' 'Cached: 10106228 kB' 'SwapCached: 0 kB' 'Active: 7168708 kB' 'Inactive: 3509668 kB' 'Active(anon): 6777928 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571244 kB' 'Mapped: 204332 kB' 'Shmem: 6209912 kB' 'KReclaimable: 231152 kB' 'Slab: 820808 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589656 kB' 'KernelStack: 20512 kB' 'PageTables: 8584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8266192 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315932 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:12.900 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 92657580 kB' 'MemUsed: 5005104 kB' 'SwapCached: 0 kB' 'Active: 2330336 kB' 'Inactive: 147500 kB' 'Active(anon): 2059560 kB' 'Inactive(anon): 0 kB' 'Active(file): 270776 kB' 'Inactive(file): 147500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2045756 kB' 'Mapped: 110948 kB' 'AnonPages: 435192 kB' 'Shmem: 1627480 kB' 'KernelStack: 11688 kB' 'PageTables: 5344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 386308 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 277632 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:12.902 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:12.902 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:12.902 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:12.902 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:12.902 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:12.902 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:12.902 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718500 kB' 'MemFree: 82591832 kB' 'MemUsed: 11126668 kB' 'SwapCached: 0 kB' 'Active: 4838676 kB' 'Inactive: 3362168 kB' 'Active(anon): 4718672 kB' 'Inactive(anon): 0 kB' 'Active(file): 120004 kB' 'Inactive(file): 3362168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8064660 kB' 'Mapped: 93384 kB' 'AnonPages: 136340 kB' 'Shmem: 4582488 kB' 'KernelStack: 8872 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122476 kB' 'Slab: 434500 kB' 'SReclaimable: 122476 kB' 'SUnreclaim: 312024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
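The node0 lookup just traced shows the full shape of get_meminfo: pick /proc/meminfo or the per-node meminfo file, strip the "Node N " prefix, then read "Key: value" pairs until the requested key matches and echo its value. A minimal sketch of that pattern, assuming the shape shown in the xtrace (the real helper lives in setup/common.sh and may differ in details):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch of the lookup the trace exercises; names mirror the xtrace.
    get_meminfo() {
        local get=$1 node=$2
        local var val _
        local mem_f=/proc/meminfo mem
        # Per-node files carry the same fields, each prefixed with "Node N".
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total      # e.g. prints 1024 on this box
    get_meminfo HugePages_Surp 0     # surplus pages on NUMA node 0

The skipped scan lines above are exactly this loop hitting the "continue" branch for every key before the match.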
00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:12.901 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
[... setup/common.sh@17-@31 repeat the same setup, this time reading /sys/devices/system/node/node1/meminfo ...]
00:04:12.902 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718500 kB' 'MemFree: 82591832 kB' 'MemUsed: 11126668 kB' 'SwapCached: 0 kB' 'Active: 4838676 kB' 'Inactive: 3362168 kB' 'Active(anon): 4718672 kB' 'Inactive(anon): 0 kB' 'Active(file): 120004 kB' 'Inactive(file): 3362168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8064660 kB' 'Mapped: 93384 kB' 'AnonPages: 136340 kB' 'Shmem: 4582488 kB' 'KernelStack: 8872 kB' 'PageTables: 3420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122476 kB' 'Slab: 434500 kB' 'SReclaimable: 122476 kB' 'SUnreclaim: 312024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... setup/common.sh@32 scans the node1 snapshot key by key, skipping each field with "continue" until HugePages_Surp matches ...]
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:12.903 node0=512 expecting 512
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:12.903 node1=512 expecting 512
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:12.903
00:04:12.903 real 0m3.067s
00:04:12.903 user 0m1.237s
00:04:12.903 sys 0m1.900s
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:12.903 09:52:57 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:12.903 ************************************
00:04:12.903 END TEST even_2G_alloc
00:04:12.903 ************************************
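The @126-@130 lines above are a compact set comparison: each node's page count is used as an array index into sorted_t (what the test computed) and sorted_s (what sysfs reports), and bash then compares the two index lists, which is why the final check collapses to [[ 512 == \5\1\2 ]]. A short sketch of the idiom, assuming the shape the trace shows:

    #!/usr/bin/env bash
    # Per-node counts; both nodes got 512 pages in even_2G_alloc.
    nodes_test=([0]=512 [1]=512)   # what the test expects per node
    nodes_sys=([0]=512 [1]=512)    # what sysfs actually reports per node

    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        # Using the counts as indexes turns the index lists into value sets.
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
    done

    # Both sets collapse to the single value 512 here, so this holds.
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]] && echo "per-node layout matches"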
00:04:12.903 09:52:57 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:12.903 09:52:57 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:12.903 09:52:57 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:12.903 09:52:57 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:12.903 ************************************
00:04:12.903 START TEST odd_alloc
00:04:12.903 ************************************
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:12.903 09:52:57 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:15.440 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:15.440 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:15.440 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
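odd_alloc asks for HUGEMEM=2049, i.e. size = 2049 * 1024 = 2098176 kB; at 2048 kB per page that is 1024.5 pages, recorded above as nr_hugepages=1025 (hence the "odd" in the test name, and Hugetlb: 2099200 kB = 1025 * 2048 kB in the snapshots that follow). The @81-@84 lines then split the odd total across the two nodes: 512 to node1 and the 513 remainder to node0. A sketch of that split, under the shape the trace shows (the helper name here is hypothetical):

    #!/usr/bin/env bash
    # Hand out total/nodes pages to the highest-numbered node, subtract,
    # and repeat, so the odd remainder piles up on node 0.
    split_per_node() {
        local total=$1 nodes=$2
        local -a per_node
        while (( nodes > 0 )); do
            per_node[nodes - 1]=$(( total / nodes ))   # 1025/2 -> node1 gets 512
            total=$(( total - per_node[nodes - 1] ))   # 513 left over
            nodes=$(( nodes - 1 ))                     # 513/1 -> node0 gets 513
        done
        echo "${per_node[@]}"
    }

    split_per_node 1025 2   # -> "513 512", matching nodes_test in the trace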
00:04:15.440 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:15.440 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:15.440 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.441 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175255320 kB' 'MemAvailable: 178126892 kB' 'Buffers: 4132 kB' 'Cached: 10106344 kB' 'SwapCached: 0 kB' 'Active: 7171332 kB' 'Inactive: 3509668 kB' 'Active(anon): 6780552 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574236 kB' 'Mapped: 204352 kB' 'Shmem: 6210028 kB' 'KReclaimable: 231152 kB' 'Slab: 820504 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589352 kB' 'KernelStack: 20576 kB' 'PageTables: 8752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029596 kB' 'Committed_AS: 8267208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315980 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
[... setup/common.sh@32 walks this snapshot key by key, skipping every field with "continue" until AnonHugePages matches ...]
00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
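The @96 guard above reads /sys/kernel/mm/transparent_hugepage/enabled, where the kernel brackets the active mode (here "always [madvise] never", so THP is on in madvise mode), and only bothers fetching AnonHugePages when the mode is not [never]. A sketch of that check, assuming the pattern the trace shows:

    #!/usr/bin/env bash
    # The bracketed word in this sysfs file is the active THP mode.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *"[never]"* ]]; then
        # THP can hand out anonymous huge pages, so they must be accounted
        # for when verifying the explicit hugetlb reservation.
        echo "THP active ($thp), checking AnonHugePages"
    fi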
'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574400 kB' 'Mapped: 204352 kB' 'Shmem: 6210028 kB' 'KReclaimable: 231152 kB' 'Slab: 820556 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589404 kB' 'KernelStack: 20560 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029596 kB' 'Committed_AS: 8267224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315980 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB' 00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.707 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.708 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... xtrace condensed: identical compare-and-continue records follow for each remaining key, Inactive through HugePages_Rsvd, none of which matches HugePages_Surp ...]
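
A note on the escaped patterns that fill this trace: the source compares each key with a quoted right-hand side (on the order of [[ $var == "$get" ]]), and bash's xtrace re-renders that quoted operand with every character backslash-escaped, since inside [[ ]] an unquoted right-hand side is treated as a glob while an escaped one matches literally. A minimal standalone sketch of the idiom, with the key hard-coded for illustration:

    # Escaping each character makes the right-hand side a literal string,
    # not a glob -- equivalent to [[ $key == 'HugePages_Surp' ]].
    key=HugePages_Surp
    [[ $key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && echo matched
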
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175256516 kB' 'MemAvailable: 178128088 kB' 'Buffers: 4132 kB' 'Cached: 10106364 kB' 'SwapCached: 0 kB' 'Active: 7171640 kB' 'Inactive: 3509668 kB' 'Active(anon): 6780860 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574436 kB' 'Mapped: 204352 kB' 'Shmem: 6210048 kB' 'KReclaimable: 231152 kB' 'Slab: 820556 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589404 kB' 'KernelStack: 20576 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029596 kB' 'Committed_AS: 8267244 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315980 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB' 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.709 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.709 
09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... xtrace condensed: identical compare-and-continue records follow for each remaining key, MemAvailable through CmaTotal, none of which matches HugePages_Rsvd ...]
09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:15.712 nr_hugepages=1025 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.712 resv_hugepages=0 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.712 surplus_hugepages=0 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.712 anon_hugepages=0 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
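
With anon=0, surp=0 and resv=0 now collected, the two arithmetic guards above (hugepages.sh@107 and @109) are what decide whether the odd-sized allocation succeeded. Restated as a standalone sketch using the values echoed in this run:

    # Values reported by this run; the guards pass only if the kernel gave
    # exactly the requested odd page count, with no surplus or reserved
    # pages outstanding.
    nr_hugepages=1025
    surp=0   # HugePages_Surp
    resv=0   # HugePages_Rsvd
    if (( 1025 == nr_hugepages + surp + resv )) && (( 1025 == nr_hugepages )); then
        echo 'odd_alloc accounting consistent'
    fi
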
00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175256520 kB' 'MemAvailable: 178128092 kB' 'Buffers: 4132 kB' 'Cached: 10106388 kB' 'SwapCached: 0 kB' 'Active: 7171700 kB' 'Inactive: 3509668 kB' 'Active(anon): 6780920 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574432 kB' 'Mapped: 204352 kB' 'Shmem: 6210072 kB' 'KReclaimable: 231152 kB' 'Slab: 820556 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589404 kB' 'KernelStack: 20576 kB' 'PageTables: 8756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029596 kB' 'Committed_AS: 8267264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315980 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB' 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.712 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
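
The mem=("${mem[@]#Node +([0-9]) }") step that precedes every parse is an extglob prefix strip: lines read from a per-node file such as /sys/devices/system/node/node0/meminfo begin with "Node 0 ", and removing that prefix leaves the same "Key: value" shape as /proc/meminfo, so one loop can parse both sources. A small self-contained sketch (the sample lines are illustrative, not from this run's per-node files):

    #!/usr/bin/env bash
    shopt -s extglob                     # enables the +([0-9]) pattern
    mem=('Node 0 MemTotal: 191381184 kB' 'Node 0 HugePages_Total: 1025')
    mem=("${mem[@]#Node +([0-9]) }")     # strip the "Node <n> " prefix
    printf '%s\n' "${mem[@]}"            # -> MemTotal: ... / HugePages_Total: 1025
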
00:04:15.712 [... xtrace condensed: identical compare-and-continue records follow for each remaining key, SwapCached through HardwareCorrupted, none of which matches HugePages_Total ...]
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.713 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 92654188 kB' 'MemUsed: 5008496 kB' 'SwapCached: 0 kB' 'Active: 2331732 kB' 'Inactive: 147500 kB' 'Active(anon): 2060956 kB' 'Inactive(anon): 0 kB' 'Active(file): 270776 kB' 'Inactive(file): 147500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2045832 kB' 'Mapped: 110952 kB' 'AnonPages: 436864 kB' 'Shmem: 1627556 kB' 'KernelStack: 11720 kB' 'PageTables: 5384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 386236 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 277560 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- 
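What this stretch of the trace is exercising: SPDK's get_meminfo helper (setup/common.sh) reads /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node argument is given (stripping the "Node <N> " prefix those per-node files carry), splits each line on IFS=': ' with read -r var val _, and echoes the value once the requested key matches. hugepages.sh@110 then asserts that all 1025 pages requested by the odd_alloc test are accounted for (nr_hugepages + surplus + reserved), and @115-@117 fold per-node surplus into the expected 512/513 split. A minimal standalone sketch of the same lookup technique, with a hypothetical helper name (a sketch, not the SPDK script itself):

  #!/usr/bin/env bash
  # meminfo_get <field> [numa-node]: echo the value of <field>, system-wide or per node.
  meminfo_get() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
      # Per-node counters live in sysfs; every line there starts with "Node <n> ".
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while read -r line; do
          line=${line#"Node $node "}                # no-op for plain /proc/meminfo lines
          IFS=': ' read -r var val _ <<< "$line"    # "HugePages_Total:   512" -> var, val
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < "$mem_f"
      return 1
  }
  # On the box traced above: meminfo_get HugePages_Total   -> 1025
  #                          meminfo_get HugePages_Total 0 -> 512
  #                          meminfo_get HugePages_Total 1 -> 513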
00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.714 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ [... the same check-and-continue cycle repeats against \H\u\g\e\P\a\g\e\s\_\S\u\r\p for every node0 meminfo field from MemFree through HugePages_Free ...] 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.715 09:53:00
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718500 kB' 'MemFree: 82602332 kB' 'MemUsed: 11116168 kB' 'SwapCached: 0 kB' 'Active: 4840088 kB' 'Inactive: 3362168 kB' 'Active(anon): 4720084 kB' 'Inactive(anon): 0 kB' 'Active(file): 120004 kB' 'Inactive(file): 3362168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8064708 kB' 'Mapped: 93400 kB' 'AnonPages: 137724 kB' 'Shmem: 4582536 kB' 'KernelStack: 8872 kB' 'PageTables: 3412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122476 kB' 'Slab: 434320 kB' 'SReclaimable: 122476 kB' 'SUnreclaim: 311844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.715 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.715 09:53:00 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue [... the same check-and-continue cycle repeats against \H\u\g\e\P\a\g\e\s\_\S\u\r\p for every node1 meminfo field from SwapCached through HugePages_Free ...] 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:04:15.717 node0=512 expecting 513 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:15.717 node1=513 expecting 512 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:15.717 00:04:15.717 real 0m3.059s 00:04:15.717 user 0m1.292s 00:04:15.717 sys 0m1.836s 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:15.717 09:53:00 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.717 ************************************ 00:04:15.717 END TEST odd_alloc 00:04:15.717 ************************************ 00:04:15.717 09:53:00 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:15.717 09:53:00 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:15.717 09:53:00 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:15.717 09:53:00 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.977 ************************************ 00:04:15.977 START TEST custom_alloc 00:04:15.977 ************************************ 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node [... the per-node bookkeeping at hugepages.sh@62-@78 runs once more and lands on nodes_test[0]=512, nodes_test[1]=1024 ...] 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.977 09:53:00 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:18.511 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:18.511 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:18.511 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
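The HUGENODE spec handed to scripts/setup.sh above falls straight out of the arithmetic traced in the preceding lines: each per-node size request in kB (1048576 for node 0, 2097152 for node 1) becomes a page count at the platform's 2048 kB default hugepage size (confirmed by 'Hugepagesize: 2048 kB' in the meminfo dump below), giving nodes_hp[0]=512 and nodes_hp[1]=1024; the loop at hugepages.sh@181-@183 comma-joins them under the local IFS=, from @167 while summing the 1536-page total. A standalone sketch of that bookkeeping (it assumes the 2048 kB default and is not the SPDK script itself):

  #!/usr/bin/env bash
  # Rebuild the HUGENODE spec the way the trace above derives it.
  default_hugepages=2048                            # kB per hugepage (platform default assumed)
  declare -a nodes_hp HUGENODE
  nodes_hp[0]=$(( 1048576 / default_hugepages ))    # 512 pages requested on node 0
  nodes_hp[1]=$(( 2097152 / default_hugepages ))    # 1024 pages requested on node 1
  _nr_hugepages=0
  for node in "${!nodes_hp[@]}"; do
      HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
      (( _nr_hugepages += nodes_hp[node] ))
  done
  IFS=,                                  # comma-join, as the local IFS=, at hugepages.sh@167 does
  echo "HUGENODE=${HUGENODE[*]}"         # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
  echo "nr_hugepages=$_nr_hugepages"     # nr_hugepages=1536

setup.sh then reserves those counts on their NUMA nodes, and the verify_nr_hugepages pass that follows re-reads /proc/meminfo, where HugePages_Total: 1536 confirms the reservation took.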
00:04:18.775 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:18.775 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages [... local declarations at hugepages.sh@89-@94 (node, sorted_t, sorted_s, surp, resv, anon) elided ...] 00:04:18.775 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:18.775 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages [... get_meminfo setup at common.sh@17-@31 elided: no node argument, so mem_f stays /proc/meminfo ...] 00:04:18.775 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 174211288 kB' 'MemAvailable: 177082860 kB' 'Buffers: 4132 kB' 'Cached: 10106496 kB' 'SwapCached: 0 kB' 'Active: 7173636 kB' 'Inactive: 3509668 kB' 'Active(anon): 6782856 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575428 kB' 'Mapped: 204448 kB' 'Shmem: 6210180 kB' 'KReclaimable: 231152 kB' 'Slab: 820492 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589340 kB' 'KernelStack: 20544 kB' 'PageTables: 8708 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506332 kB' 'Committed_AS: 8270512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315964 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
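verify_nr_hugepages opens with a transparent-hugepage gate: hugepages.sh@96 tests the current THP mode ("always [madvise] never" here, i.e. madvise mode) against the pattern *[never]*, and since THP is not fully off it samples AnonHugePages (0 kB at this point) before trusting the explicit hugepage counters, because THP can hand out anonymous hugepages behind the test's back. A sketch of the same gate, reading the standard sysfs knob (variable names are illustrative):

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # THP is at least partially enabled, so anonymous hugepages may exist;
      # record their current footprint from /proc/meminfo.
      awk '/^AnonHugePages:/ {print $2 " kB"}' /proc/meminfo
  fi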
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.775 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [00:04:18.775 - 00:04:18.777: the same '[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' / 'continue' xtrace pair repeats for each remaining non-matching /proc/meminfo key, MemFree through HardwareCorrupted ...] 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 174212008 kB' 'MemAvailable: 177083580 kB' 'Buffers: 4132 kB' 'Cached: 10106500 kB' 'SwapCached: 0 kB' 'Active: 7172736 kB' 'Inactive: 3509668 kB' 'Active(anon): 6781956 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575628 kB' 'Mapped: 204356 kB' 'Shmem: 6210184 kB' 'KReclaimable: 231152 kB' 'Slab: 820456 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589304 kB' 'KernelStack: 20576 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506332 kB' 'Committed_AS: 8270528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315932 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB' 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.777 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [00:04:18.777 - 00:04:18.778: the same '[[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' xtrace pair repeats for each remaining non-matching /proc/meminfo key, Active through Unaccepted ...] 00:04:18.778 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.778 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
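The get_meminfo trace repeated above amounts to a small /proc/meminfo lookup: slurp the file into an array, strip the "Node <n> " prefix carried by the per-node sysfs copies, then walk the "key: value" lines until the requested key matches and echo its value. A minimal standalone sketch of that pattern follows (the body is reconstructed from the trace; the node-argument handling and the non-zero return on a miss are assumptions, not necessarily the exact SPDK helper):

shopt -s extglob                        # the +([0-9]) pattern below needs extglob

get_meminfo() {
    local get=$1 node=${2:-}            # key to look up, optional NUMA node
    local var val _ mem
    local mem_f=/proc/meminfo
    # Per-node counters live under /sys; fall back to the global file.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <n> "; strip it so the
    # same "key: value" parsing works for both sources.
    mem=("${mem[@]#Node +([0-9]) }")
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"                 # e.g. 1536 for HugePages_Total here
            return 0
        fi
    done < <(printf '%s\n' "${mem[@]}")
    return 1                            # assumed behavior when the key is absent
}

get_meminfo HugePages_Total             # prints 1536 on this box

Because the loop runs under xtrace, every skipped key produces one '[[ ... ]]' / 'continue' pair, which is why the same ladder recurs for each field the verifier reads.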
00:04:18.778 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.778 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 174215016 kB' 'MemAvailable: 177086588 kB' 'Buffers: 4132 kB' 'Cached: 10106504 kB' 'SwapCached: 0 kB' 'Active: 7172360 kB' 'Inactive: 3509668 kB' 'Active(anon): 6781580 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 574716 kB' 'Mapped: 204356 kB' 'Shmem: 6210188 kB' 'KReclaimable: 231152 kB' 'Slab: 820456 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589304 kB' 'KernelStack: 20560 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506332 kB' 'Committed_AS: 8269176 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 
315884 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB' 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.779 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue [00:04:18.779 - 00:04:18.781: the same '[[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]' / 'continue' xtrace pair repeats for each remaining non-matching /proc/meminfo key, MemFree through HugePages_Free ...] 00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc 
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:18.781 nr_hugepages=1536
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:18.781 resv_hugepages=0
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:18.781 surplus_hugepages=0
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:18.781 anon_hugepages=0
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.781 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.782 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.782 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 174211748 kB' 'MemAvailable: 177083320 kB' 'Buffers: 4132 kB' 'Cached: 10106504 kB' 'SwapCached: 0 kB' 'Active: 7173652 kB' 'Inactive: 3509668 kB' 'Active(anon): 6782872 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576052 kB' 'Mapped: 204356 kB' 'Shmem: 6210188 kB' 'KReclaimable: 231152 kB' 'Slab: 820456 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 589304 kB' 'KernelStack: 20656 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506332 kB' 'Committed_AS: 8270572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 315996 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
00:04:18.782 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:18.782 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/continue iterations for MemFree through Unaccepted trimmed ...]
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
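Two pieces of bookkeeping just went by: get_nodes walked /sys/devices/system/node/node+([0-9]) and recorded each node's hugepage count (512 for node0, 1024 for node1, hence no_nodes=2), and the loop traced next asks each node for its HugePages_Surp and folds reserved and surplus pages into the per-node expectations. A top-level sketch of that accounting, reusing the get_meminfo sketch above; the trace only shows the resulting assignments, so reading nodes_sys via get_meminfo is an assumption:

shopt -s extglob
declare -a nodes_sys nodes_test
nodes_test=([0]=512 [1]=1024)    # set earlier in this run by get_test_nr_hugepages
resv=0                           # HugePages_Rsvd, read a few steps back

# get_nodes: one entry per NUMA node, keyed by the trailing node number.
for node in /sys/devices/system/node/node+([0-9]); do
    nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}        # 2 on this machine

# Fold reserved and surplus pages into what each node is expected to hold.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done

With both surplus reads returning 0, nodes_test stays at 512 and 1024, which is what the "expecting" lines print further down.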
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.783 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.784 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.784 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 92640296 kB' 'MemUsed: 5022388 kB' 'SwapCached: 0 kB' 'Active: 2332608 kB' 'Inactive: 147500 kB' 'Active(anon): 2061832 kB' 'Inactive(anon): 0 kB' 'Active(file): 270776 kB' 'Inactive(file): 147500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2045900 kB' 'Mapped: 110948 kB' 'AnonPages: 437036 kB' 'Shmem: 1627624 kB' 'KernelStack: 11688 kB' 'PageTables: 5308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 386368 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 277692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:18.784 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.784 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/continue iterations for the remaining node0 meminfo keys trimmed ...]
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93718500 kB' 'MemFree: 81569960 kB' 'MemUsed: 12148540 kB' 'SwapCached: 0 kB' 'Active: 4840684 kB' 'Inactive: 3362168 kB' 'Active(anon): 4720680 kB' 'Inactive(anon): 0 kB' 'Active(file): 120004 kB' 'Inactive(file): 3362168 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8064740 kB' 'Mapped: 93416 kB' 'AnonPages: 138228 kB' 'Shmem: 4582568 kB' 'KernelStack: 9000 kB' 'PageTables: 3444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 122476 kB' 'Slab: 434092 kB' 'SReclaimable: 122476 kB' 'SUnreclaim: 311616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.785 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... identical setup/common.sh@31-32 read/continue iterations for the remaining node1 meminfo keys trimmed ...]
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:18.786 node0=512 expecting 512
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:18.786 node1=1024 expecting 1024
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:18.786
00:04:18.786 real	0m3.009s
00:04:18.786 user	0m1.193s
00:04:18.786 sys	0m1.882s
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:18.786 09:53:03 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:18.786 ************************************
00:04:18.786 END TEST custom_alloc
00:04:18.786 ************************************
00:04:18.786 09:53:03 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:18.786 09:53:03 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:18.786 09:53:03 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:18.786 09:53:03 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:19.046 ************************************
00:04:19.046 START TEST no_shrink_alloc
00:04:19.046 ************************************
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
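The trace above walks get_test_nr_hugepages step by step: a 2097152 kB request becomes nr_hugepages=1024, pinned to the user-supplied node list ('0'). Below is a minimal bash reconstruction of that bookkeeping, not the verbatim setup/hugepages.sh source; the division by 2048 (the Hugepagesize in kB reported later in /proc/meminfo) is an assumption that happens to match the traced values (2097152 / 2048 = 1024).

#!/usr/bin/env bash
# Sketch of the per-node hugepage bookkeeping traced above (reconstruction).
get_test_nr_hugepages() {
	local size=$1; shift
	local node_ids=("$@")               # e.g. ('0'), as in the trace
	nr_hugepages=$(( size / 2048 ))     # assumed formula; 1024 in the run above
	local node
	nodes_test=()
	for node in "${node_ids[@]}"; do
		nodes_test[node]=$nr_hugepages  # nodes_test[0]=1024 in the trace
	done
	return 0
}

get_test_nr_hugepages 2097152 0
echo "nr_hugepages=$nr_hugepages, nodes_test[0]=${nodes_test[0]}"

With no node list, the real helper instead spreads _nr_hugepages across all _no_nodes (2 on this box), which is why the earlier custom_alloc test could assert node0=512 and node1=1024 from a mixed allocation.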
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:19.046 09:53:03 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:21.607 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:21.607 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:21.607 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.871 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.872 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175270380 kB' 'MemAvailable: 178141952 kB' 'Buffers: 4132 kB' 'Cached: 10106652 kB' 'SwapCached: 0 kB' 'Active: 7173912 kB' 'Inactive: 3509668 kB' 'Active(anon): 6783132 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575944 kB' 'Mapped: 204516 kB' 'Shmem: 6210336 kB' 'KReclaimable: 231152 kB' 'Slab: 820108 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 588956 kB' 'KernelStack: 20832 kB' 'PageTables: 9956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8271180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316268 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
00:04:21.872 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [repeated for every non-matching field from MemTotal through HardwareCorrupted while scanning for AnonHugePages]
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175268616 kB' 'MemAvailable: 178140188 kB' 'Buffers: 4132 kB' 'Cached: 10106656 kB' 'SwapCached: 0 kB' 'Active: 7174668 kB' 'Inactive: 3509668 kB' 'Active(anon): 6783888 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576808 kB' 'Mapped: 204516 kB' 'Shmem: 6210340 kB' 'KReclaimable: 231152 kB' 'Slab: 820032 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 588880 kB' 'KernelStack: 20816 kB' 'PageTables: 9392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8271200 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316156 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
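Each meminfo snapshot above is produced by get_meminfo, whose per-field xtrace is what fills most of this log: the file is read into an array, any "Node N" prefix is stripped, and each "key: value" line is split on IFS=': ' and compared against the requested key until it matches. A compact sketch of that loop, reconstructed from the traced statements (the real helper streams the array through a printf | while-read pipeline; the herestring form here is an equivalent simplification, and error handling is omitted):

#!/usr/bin/env bash
# Sketch of the meminfo scan condensed in the surrounding trace (reconstruction).
shopt -s extglob   # needed for the +([0-9]) pattern below

get_meminfo() {
	local get=$1 node=${2:-}
	local var val line
	local mem_f=/proc/meminfo mem
	# Per-node statistics come from sysfs when a node is given (assumed path,
	# consistent with the [[ -e /sys/devices/system/node/node/meminfo ]] check).
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")       # strip "Node N " prefix on per-node files
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue   # the long runs of "continue" in the trace
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Total   # prints 1024 on the box traced above

Because every lookup rescans the whole file, one verify_nr_hugepages pass (anon, surp, resv, plus per-node totals) emits the same field-by-field xtrace several times over, which is exactly the repetition condensed above and below.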
00:04:21.873 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [repeated for every non-matching field from MemTotal through HugePages_Rsvd while scanning for HugePages_Surp]
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175267636 kB' 'MemAvailable: 178139208 kB' 'Buffers: 4132 kB' 'Cached: 10106676 kB' 'SwapCached: 0 kB' 'Active: 7175348 kB' 'Inactive: 3509668 kB' 'Active(anon): 6784568 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 577480 kB' 'Mapped: 204948 kB' 'Shmem: 6210360 kB' 'KReclaimable: 231152 kB' 'Slab: 819992 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 588840 kB' 'KernelStack: 20976 kB' 'PageTables: 9696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8271616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316092 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
00:04:21.875 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue [repeated for each non-matching field from MemTotal through Committed_AS while scanning for HugePages_Rsvd; the excerpt is truncated here mid-scan]
09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:21.877 nr_hugepages=1024 00:04:21.877 09:53:06 
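For reference, the get_meminfo helper being traced here reduces to a small key/value parser. The following is a hedged, paraphrased sketch reconstructed from the trace itself, not the verbatim setup/common.sh source; the standalone function packaging and argument handling are assumptions:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    # Return the value of one /proc/meminfo (or per-node meminfo) field.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        # With a node argument, prefer the per-node view when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # Usage, mirroring the trace above: resv=$(get_meminfo HugePages_Rsvd)  -> 0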
00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:21.877 resv_hugepages=0
00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:21.877 surplus_hugepages=0
00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:21.877 anon_hugepages=0
00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: get_meminfo runs with get=HugePages_Total and node='', keeps mem_f=/proc/meminfo, and mapfiles the snapshot printed below]
00:04:21.877 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175264844 kB' 'MemAvailable: 178136416 kB' 'Buffers: 4132 kB' 'Cached: 10106692 kB' 'SwapCached: 0 kB' 'Active: 7179528 kB' 'Inactive: 3509668 kB' 'Active(anon): 6788748 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 581652 kB' 'Mapped: 205300 kB' 'Shmem: 6210376 kB' 'KReclaimable: 231152 kB' 'Slab: 819984 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 588832 kB' 'KernelStack: 20752 kB' 'PageTables: 9300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8274756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316000 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
[xtrace condensed: every key from MemTotal through Unaccepted hits 'continue' before HugePages_Total matches]
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
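The get_nodes step just traced (hugepages.sh@27-33) discovers the NUMA nodes and records each node's current 2 MiB hugepage count (1024 on node0, 0 on node1 on this box). A minimal sketch of that enumeration; reading the per-node sysfs nr_hugepages file as the value source is an assumption, since the trace only shows the already-expanded values:

    shopt -s extglob
    declare -a nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # e.g. /sys/devices/system/node/node0 -> array index 0
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]}   # 2 on this test box
    (( no_nodes > 0 ))          # the test needs at least one node to check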
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
[xtrace condensed: get_meminfo runs with get=HugePages_Surp and node=0, switches mem_f to /sys/devices/system/node/node0/meminfo, strips the 'Node 0 ' prefixes, and mapfiles the node0 snapshot printed below]
00:04:21.879 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 91607272 kB' 'MemUsed: 6055412 kB' 'SwapCached: 0 kB' 'Active: 2339704 kB' 'Inactive: 147500 kB' 'Active(anon): 2068928 kB' 'Inactive(anon): 0 kB' 'Active(file): 270776 kB' 'Inactive(file): 147500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2046100 kB' 'Mapped: 111104 kB' 'AnonPages: 444272 kB' 'Shmem: 1627824 kB' 'KernelStack: 11896 kB' 'PageTables: 5956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 385856 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 277180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: every node0 key from MemTotal through HugePages_Free hits 'continue' before HugePages_Surp matches]
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:21.881 node0=1024 expecting 1024
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:21.881 09:53:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
00:04:25.176 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:25.176 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:04:25.176 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:04:25.176 INFO: Requested 512 hugepages but 1024 already allocated on node0
09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
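This is the heart of the no_shrink_alloc case. With resv=0 and surp=0 the expected per-node count stays at 1024, and the snapshot's Hugetlb value is consistent: 1024 pages x 2048 kB = 2097152 kB, i.e. 2 GiB. Re-running setup.sh with a smaller request must not shrink that pool, which is exactly what the INFO line reports. A sketch of the invocation pattern, with the workspace path taken from this job:

    # Ask for fewer hugepages than are already allocated; keep the existing ones.
    export CLEAR_HUGE=no NRHUGE=512
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh
    # Expected: "INFO: Requested 512 hugepages but 1024 already allocated on node0"
    # followed by a verify_nr_hugepages pass that still sees node0=1024.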
00:04:25.176 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:25.176 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:25.176 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:25.176 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:25.176 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:25.176 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.176 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.177 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.177 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.177 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.177 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.177 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.177 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.177 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.177 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175257652 kB' 'MemAvailable: 178129224 kB' 'Buffers: 4132 kB' 'Cached: 10106784 kB' 'SwapCached: 0 kB' 'Active: 7174736 kB' 'Inactive: 3509668 kB' 'Active(anon): 6783956 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 576224 kB' 'Mapped: 204464 kB' 'Shmem: 6210468 kB' 'KReclaimable: 231152 kB' 'Slab: 819736 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 588584 kB' 'KernelStack: 20608 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8269120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316092 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
00:04:25.177 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive scan elided: every field from MemTotal through HardwareCorrupted is compared against AnonHugePages and skipped via 'continue']
00:04:25.178 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:25.178 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.178 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.178 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:25.178 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:25.178 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.178 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18-31 -- # [get_meminfo prologue as above: locals, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefix, IFS=': ' read loop]
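(Taken together, the hugepages.sh@94-@100 steps amount to three meminfo lookups feeding the verification. A sketch of that prologue, assuming it lives inside verify_nr_hugepages(); the variable names and the THP guard are taken from the trace, the wrapper function is an assumption:)

verify_prologue() {
        local anon=0 surp resv
        # @96: only consult AnonHugePages when THP is not set to [never],
        # since transparent huge pages would otherwise skew the count
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
                anon=$(get_meminfo AnonHugePages)   # @97 -> 0 in this run
        fi
        surp=$(get_meminfo HugePages_Surp)          # @99 -> 0, resolved just below
        resv=$(get_meminfo HugePages_Rsvd)          # @100 -> the scan that follows
        echo "$anon $surp $resv"
}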
00:04:25.178 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175257796 kB' 'MemAvailable: 178129368 kB' 'Buffers: 4132 kB' 'Cached: 10106788 kB' 'SwapCached: 0 kB' 'Active: 7173736 kB' 'Inactive: 3509668 kB' 'Active(anon): 6782956 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575732 kB' 'Mapped: 204384 kB' 'Shmem: 6210472 kB' 'KReclaimable: 231152 kB' 'Slab: 819680 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 588528 kB' 'KernelStack: 20576 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8269136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316060 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
00:04:25.178 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [repetitive scan elided: every field from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped via 'continue']
00:04:25.180 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.180 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.180 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.180 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:25.180 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:25.180 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:25.180 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18-31 -- # [get_meminfo prologue as above]
00:04:25.180 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175257796 kB' 'MemAvailable: 178129368 kB' 'Buffers: 4132 kB' 'Cached: 10106788 kB' 'SwapCached: 0 kB' 'Active: 7173768 kB' 'Inactive: 3509668 kB' 'Active(anon): 6782988 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575768 kB' 'Mapped: 204384 kB' 'Shmem: 6210472 kB' 'KReclaimable: 231152 kB' 'Slab: 819680 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 588528 kB' 'KernelStack: 20592 kB' 'PageTables: 8832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8269160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316060 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.181 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical @31 IFS=': ' / @31 read -r var val _ / @32 [[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / @32 continue iterations for every remaining /proc/meminfo field (Zswap, Zswapped, Dirty, ..., HugePages_Free) elided; the scan stops at the match below ...]
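The backslash runs on the right-hand sides above are not corruption: when the pattern operand of [[ ... == "..." ]] is quoted, bash forces a literal comparison, and xtrace prints that operand with every character escaped so the printed trace replays identically. A minimal reproduction (hypothetical snippet, not from the SPDK tree):

    set -x
    var=SwapFree
    # traced as: [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
    [[ $var == "HugePages_Rsvd" ]] || echo "no match, keep scanning"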
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:25.182 nr_hugepages=1024
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:25.182 resv_hugepages=0
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:25.182 surplus_hugepages=0
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:25.182 anon_hugepages=0
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:25.182 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.183 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.183 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.183 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.183 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381184 kB' 'MemFree: 175257544 kB' 'MemAvailable: 178129116 kB' 'Buffers: 4132 kB' 'Cached: 10106844 kB' 'SwapCached: 0 kB' 'Active: 7173728 kB' 'Inactive: 3509668 kB' 'Active(anon): 6782948 kB' 'Inactive(anon): 0 kB' 'Active(file): 390780 kB' 'Inactive(file): 3509668 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575628 kB' 'Mapped: 204384 kB' 'Shmem: 6210528 kB' 'KReclaimable: 231152 kB' 'Slab: 819680 kB' 'SReclaimable: 231152 kB' 'SUnreclaim: 588528 kB' 'KernelStack: 20560 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030620 kB' 'Committed_AS: 8269184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 316060 kB' 'VmallocChunk: 0 kB' 'Percpu: 70272 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3183572 kB' 'DirectMap2M: 20613120 kB' 'DirectMap1G: 178257920 kB'
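Everything above is one helper at work: get_meminfo dumps a meminfo source and scans it field by field until the requested key matches. A condensed sketch reconstructed from the traced setup/common.sh commands (simplified, and the plumbing between the dump and the loop is not visible in the trace, so it is assumed here):

    #!/usr/bin/env bash
    # Hedged sketch of the get_meminfo pattern this trace keeps exercising.
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo mem line
        # A per-node meminfo is used when the caller asked for one and it exists.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix each line with "Node <n> "; strip that off.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the long "continue" runs above
            echo "$val"                        # e.g. 0 for HugePages_Rsvd
            return 0
        done
        return 1
    }
    get_meminfo HugePages_Total      # -> 1024 on this box
    get_meminfo HugePages_Surp 0     # node0's surplus -> 0

The linear scan is why the log devotes so many records to a single lookup: every field before the match is read, compared, and skipped.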
[... the same @31/@32 scan now runs against HugePages_Total: MemTotal, MemFree, MemAvailable, ..., Unaccepted all hit "continue" (iterations elided) ...]
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:25.184 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97662684 kB' 'MemFree: 91593588 kB' 'MemUsed: 6069096 kB' 'SwapCached: 0 kB' 'Active: 2333224 kB' 'Inactive: 147500 kB' 'Active(anon): 2062448 kB' 'Inactive(anon): 0 kB' 'Active(file): 270776 kB' 'Inactive(file): 147500 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2046228 kB' 'Mapped: 110948 kB' 'AnonPages: 437644 kB' 'Shmem: 1627952 kB' 'KernelStack: 11736 kB' 'PageTables: 5404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 108676 kB' 'Slab: 385604 kB' 'SReclaimable: 108676 kB' 'SUnreclaim: 276928 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
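get_nodes, traced just above, discovers the NUMA topology with an extglob and seeds the expected per-node counts (1024 on node0, 0 on node1); the per-node meminfo just dumped comes from that topology. A standalone sketch of the same enumeration, where reading the live 2 MB pool counts is my illustrative substitution for the test's expected-value assignments:

    #!/usr/bin/env bash
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} strips up to the last "node", leaving the index.
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    echo "no_nodes=${#nodes_sys[@]}"   # 2 on this dual-socket rig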
[... the node0 fields are scanned against HugePages_Surp: MemTotal through HugePages_Free all hit "continue" (iterations elided) ...]
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:25.185 node0=1024 expecting 1024
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:25.185 
00:04:25.185 real 0m5.969s
00:04:25.185 user 0m2.390s
00:04:25.185 sys 0m3.713s
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:25.185 09:53:09 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:25.185 ************************************
00:04:25.185 END TEST no_shrink_alloc
00:04:25.185 ************************************
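The epilogue just printed is the point of the whole scan: the pool still satisfies the script's assertion that HugePages_Total equals nr_hugepages plus surplus plus reserved, and node0 holds the expected 1024 pages. The same check as a one-off, with awk standing in for the script's field-scanning loop (paths are standard procfs):

    #!/usr/bin/env bash
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    rsvd=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    nr=$(< /proc/sys/vm/nr_hugepages)
    if (( total == nr + surp + rsvd )); then
        echo "hugepage accounting consistent: total=$total"
    else
        echo "mismatch: total=$total nr=$nr surp=$surp rsvd=$rsvd"
    fi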
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.185 09:53:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.185 09:53:09 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:25.186 09:53:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.186 09:53:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.186 09:53:09 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:25.186 09:53:09 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:25.186 09:53:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:25.186 09:53:09 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:25.186 00:04:25.186 real 0m23.285s 00:04:25.186 user 0m8.943s 00:04:25.186 sys 0m13.516s 00:04:25.186 09:53:09 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:25.186 09:53:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:25.186 ************************************ 00:04:25.186 END TEST hugepages 00:04:25.186 ************************************ 00:04:25.186 09:53:09 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:25.186 09:53:09 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:25.186 09:53:09 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:25.186 09:53:09 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:25.186 ************************************ 00:04:25.186 START TEST driver 00:04:25.186 ************************************ 00:04:25.186 09:53:10 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:25.186 * Looking for test storage... 
00:04:25.186 
00:04:25.186 real 0m23.285s
00:04:25.186 user 0m8.943s
00:04:25.186 sys 0m13.516s
00:04:25.186 09:53:09 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:25.186 09:53:09 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:25.186 ************************************
00:04:25.186 END TEST hugepages
00:04:25.186 ************************************
00:04:25.186 09:53:09 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh
00:04:25.186 09:53:09 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:25.186 09:53:09 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:25.186 09:53:09 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:25.186 ************************************
00:04:25.186 START TEST driver
00:04:25.186 ************************************
00:04:25.186 09:53:10 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh
00:04:25.186 * Looking for test storage...
00:04:25.186 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup
00:04:25.186 09:53:10 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:25.186 09:53:10 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:25.186 09:53:10 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset
00:04:29.383 09:53:14 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:29.383 09:53:14 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:29.383 09:53:14 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:29.383 09:53:14 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:29.383 ************************************
00:04:29.383 START TEST guess_driver
00:04:29.383 ************************************
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 ))
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:29.383 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:29.383 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:29.383 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:29.383 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:29.383 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:29.383 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:29.383 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
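pick_driver's vfio branch has just answered vfio-pci: this host exposes 174 IOMMU groups and modprobe resolves the vfio_pci dependency chain. A condensed sketch of that decision (the function name and the grep are mine; the script compares the modprobe output against *.ko with [[ ]]):

    #!/usr/bin/env bash
    pick_vfio() {
        local unsafe_vfio=N
        if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
            unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        fi
        local iommu_groups=(/sys/kernel/iommu_groups/*)
        # Usable when the platform has IOMMU groups (174 here) or unsafe
        # no-IOMMU mode is enabled, and the vfio_pci module chain resolves.
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            if modprobe --show-depends vfio_pci | grep -q '\.ko'; then
                echo vfio-pci
                return 0
            fi
        fi
        echo 'No valid driver found'
        return 1
    }
    driver=$(pick_vfio)   # -> vfio-pci on this node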
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:29.383 Looking for driver=vfio-pci 00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:29.383 09:53:14 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:31.922 09:53:16 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.922 09:53:16 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.922 09:53:16 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:31.922 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.182 09:53:17 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:32.182 09:53:17 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.562 09:53:18 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:33.562 09:53:18 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:33.562 09:53:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:33.562 09:53:18 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:33.562 09:53:18 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:33.562 09:53:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:33.562 09:53:18 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.758 00:04:37.758 real 0m8.443s 00:04:37.758 user 0m2.355s 00:04:37.758 sys 0m4.029s 00:04:37.758 09:53:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.758 09:53:22 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.758 ************************************ 00:04:37.758 END TEST guess_driver 00:04:37.758 ************************************ 00:04:37.758 00:04:37.758 real 0m12.700s 00:04:37.758 user 0m3.550s 00:04:37.758 sys 0m6.283s 00:04:37.759 09:53:22 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.759 
09:53:22 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:37.759 ************************************ 00:04:37.759 END TEST driver 00:04:37.759 ************************************ 00:04:37.759 09:53:22 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:37.759 09:53:22 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.759 09:53:22 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.759 09:53:22 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:37.759 ************************************ 00:04:37.759 START TEST devices 00:04:37.759 ************************************ 00:04:37.759 09:53:22 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:04:37.759 * Looking for test storage... 00:04:37.759 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:37.759 09:53:22 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:37.759 09:53:22 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:37.759 09:53:22 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:37.759 09:53:22 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:41.060 09:53:26 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:41.060 09:53:26 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:41.060 No valid GPT data, bailing 00:04:41.060 
09:53:26 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.060 09:53:26 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:41.060 09:53:26 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:41.060 09:53:26 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:41.060 09:53:26 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:41.060 09:53:26 setup.sh.devices -- setup/common.sh@80 -- # echo 1600321314816 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@204 -- # (( 1600321314816 >= min_disk_size )) 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:41.060 09:53:26 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.060 09:53:26 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:41.060 ************************************ 00:04:41.060 START TEST nvme_mount 00:04:41.060 ************************************ 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- 
# sgdisk /dev/nvme0n1 --zap-all 00:04:41.060 09:53:26 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:41.999 Creating new GPT entries in memory. 00:04:41.999 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:41.999 other utilities. 00:04:41.999 09:53:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:41.999 09:53:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:41.999 09:53:27 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:41.999 09:53:27 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:41.999 09:53:27 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:43.380 Creating new GPT entries in memory. 00:04:43.380 The operation has completed successfully. 00:04:43.380 09:53:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2366542 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.381 09:53:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:45.918 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.918 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:45.918 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:45.918 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:45.919 09:53:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:46.177 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.177 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:46.437 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:46.437 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:04:46.437 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:46.437 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 
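The cleanup just traced tears down the partition-based mount, and the harness immediately reformats the bare disk with a size-capped filesystem. The mkfs helper from setup/common.sh driving this condenses to roughly the sketch below (renamed mkfs_and_mount here so it does not shadow the system mkfs; argument validation omitted): create the mount point, confirm the device node, format it ext4, and mount it. The optional size argument (1024M in the trace) caps the filesystem, not the device.

    # sketch of the traced mkfs helper: format a device, optionally
    # size-capped, and mount it under the test directory
    mkfs_and_mount() {
        local dev=$1 mount=$2 size=$3      # size may be empty
        mkdir -p "$mount"
        [[ -e $dev ]] || return 1          # device node must exist
        mkfs.ext4 -qF "$dev" $size         # -q quiet, -F force; an empty $size simply drops out
        mount "$dev" "$mount"
    }
    mkfs_and_mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M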
00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.437 09:53:31 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:49.023 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- 
setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:49.283 09:53:34 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:51.821 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:51.821 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:51.821 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:51.821 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.821 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:51.821 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.821 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:51.821 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 
09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:52.081 09:53:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.081 09:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.081 09:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:52.081 09:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:52.081 09:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:52.081 09:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:04:52.081 09:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:52.081 09:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:52.081 09:53:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:52.081 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:52.081 00:04:52.081 real 0m11.033s 00:04:52.081 user 0m3.280s 00:04:52.081 sys 0m5.622s 00:04:52.081 09:53:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:52.081 09:53:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:52.081 ************************************ 00:04:52.081 END TEST nvme_mount 00:04:52.081 ************************************ 00:04:52.081 09:53:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:52.081 09:53:37 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
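The dm_mount test launched here first carves the NVMe disk into two 1 GiB GPT partitions, and the sector bounds that appear in the trace below follow directly from the helper's arithmetic: 1073741824 bytes / 512 = 2097152 sectors per partition, with the first usable LBA at 2048. A minimal sketch of that partitioning step:

    # two equal partitions: part 1 spans 2048..2099199,
    # part 2 spans 2099200..4196351, exactly as traced below
    disk=/dev/nvme0n1
    size=$(( 1073741824 / 512 ))               # partition size in 512-byte sectors
    sgdisk "$disk" --zap-all                   # destroy any old GPT/MBR structures
    part_start=0 part_end=0
    for part in 1 2; do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "$disk" sgdisk "$disk" --new=$part:$part_start:$part_end
    done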
00:04:52.081 09:53:37 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:52.081 09:53:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:52.081 ************************************ 00:04:52.081 START TEST dm_mount 00:04:52.081 ************************************ 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:52.081 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:52.082 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.082 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.082 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:52.082 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.082 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:52.082 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:52.082 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:52.082 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:52.082 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:52.082 09:53:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:53.461 Creating new GPT entries in memory. 00:04:53.461 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:53.461 other utilities. 00:04:53.461 09:53:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:53.461 09:53:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:53.461 09:53:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:53.461 09:53:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:53.461 09:53:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:54.400 Creating new GPT entries in memory. 00:04:54.400 The operation has completed successfully. 
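Alongside the sgdisk calls the harness runs scripts/sync_dev_uevents.sh, which, as its name suggests, waits on device uevents until both partition nodes exist, so the later mkfs/mount steps cannot race the kernel's partition rescan. A polling stand-in with the same observable effect is sketched below; wait_for_parts and the 10-second deadline are illustrative assumptions, not the real script's interface:

    # hypothetical polling equivalent of sync_dev_uevents.sh block/partition:
    # return once every named block node exists, or fail after ~10s
    wait_for_parts() {
        local name deadline=$(( SECONDS + 10 ))
        for name in "$@"; do
            until [[ -b /dev/$name ]]; do
                (( SECONDS < deadline )) || return 1
                sleep 0.1
            done
        done
    }
    wait_for_parts nvme0n1p1 nvme0n1p2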
00:04:54.400 09:53:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:54.400 09:53:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:54.400 09:53:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:54.400 09:53:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:54.400 09:53:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:55.338 The operation has completed successfully. 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2370730 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:55.338 09:53:40 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:55.338 09:53:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:57.875 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # 
local found=0 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:58.133 09:53:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.134 09:53:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.134 09:53:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.424 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 
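The ladder of [[ 0000:xx:04.x == \0\0\0\0... ]] comparisons that follows is the verify helper scanning the setup.sh config output one line at a time: with PCI_ALLOWED pinned to the NVMe under test, every other BDF (the I/OAT engines) must report as not bound, and the allowed BDF's status line must name the expected active devices. Schematically it amounts to the sketch below, with the field splitting and glob match simplified from the trace:

    # sketch of the traced verify loop
    allowed=0000:5e:00.0
    mounts='holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2'
    found=0
    while read -r pci _ _ status; do
        [[ $pci == "$allowed" ]] || continue                        # ignore the other BDFs
        [[ $status == *"Active devices: "*"$mounts"* ]] && found=1  # device held as expected
    done < <(PCI_ALLOWED=$allowed /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config)
    (( found == 1 ))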
00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:05:01.425 09:53:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:01.425 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:01.425 00:05:01.425 real 0m8.860s 00:05:01.425 user 0m2.158s 00:05:01.425 sys 0m3.733s 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.425 09:53:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:01.425 ************************************ 00:05:01.425 END TEST dm_mount 00:05:01.425 ************************************ 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 
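The byte strings wipefs prints during these cleanups are the on-disk magic numbers it blanks: 53 ef at offset 0x438 is the ext4 superblock magic 0xEF53 stored little-endian (the superblock starts 1024 bytes in, with the magic 0x38 bytes into it), 45 46 49 20 50 41 52 54 is ASCII "EFI PART" from the primary GPT header at LBA 1 and its backup at the end of the disk, and 55 aa at offset 0x1fe is the protective-MBR boot signature. A one-liner confirms the GPT bytes:

    # decode the GPT signature bytes reported by wipefs
    printf '\x45\x46\x49\x20\x50\x41\x52\x54\n'    # prints: EFI PART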
00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.425 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:01.425 /dev/nvme0n1: 8 bytes were erased at offset 0x1749a955e00 (gpt): 45 46 49 20 50 41 52 54 00:05:01.425 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:01.425 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.425 09:53:46 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:01.425 00:05:01.425 real 0m23.613s 00:05:01.425 user 0m6.757s 00:05:01.425 sys 0m11.630s 00:05:01.425 09:53:46 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.425 09:53:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.425 ************************************ 00:05:01.425 END TEST devices 00:05:01.425 ************************************ 00:05:01.425 00:05:01.425 real 1m21.049s 00:05:01.425 user 0m26.372s 00:05:01.425 sys 0m43.830s 00:05:01.425 09:53:46 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.425 09:53:46 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:01.425 ************************************ 00:05:01.425 END TEST setup.sh 00:05:01.425 ************************************ 00:05:01.425 09:53:46 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:04.716 Hugepages 00:05:04.716 node hugesize free / total 00:05:04.716 node0 1048576kB 0 / 0 00:05:04.716 node0 2048kB 2048 / 2048 00:05:04.716 node1 1048576kB 0 / 0 00:05:04.716 node1 2048kB 0 / 0 00:05:04.716 00:05:04.716 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:04.716 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:04.716 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:04.716 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:04.716 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:04.716 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:04.716 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:04.716 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:04.716 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:04.716 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:04.716 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:04.716 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:04.716 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:04.716 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:04.716 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:04.716 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:04.716 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:05:04.716 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:04.716 09:53:49 -- spdk/autotest.sh@130 -- # uname -s 00:05:04.716 09:53:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:04.716 09:53:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:04.716 09:53:49 -- 
common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:07.251 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:07.251 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:08.629 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:08.889 09:53:53 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:09.826 09:53:54 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:09.826 09:53:54 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:09.826 09:53:54 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.826 09:53:54 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:09.826 09:53:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:09.826 09:53:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:09.826 09:53:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.826 09:53:54 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:09.826 09:53:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:09.826 09:53:54 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:09.826 09:53:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:09.826 09:53:54 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:12.363 Waiting for block devices as requested 00:05:12.661 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:05:12.661 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:12.935 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:12.935 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:12.935 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:12.935 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:13.194 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:13.194 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:13.194 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:13.194 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:13.454 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:13.454 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:13.454 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:13.713 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:13.713 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:13.713 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:13.972 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:13.972 09:53:58 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:13.972 09:53:58 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:05:13.972 09:53:58 -- common/autotest_common.sh@1502 -- # 
readlink -f /sys/class/nvme/nvme0 00:05:13.972 09:53:58 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:05:13.972 09:53:58 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:13.972 09:53:58 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:05:13.972 09:53:58 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:05:13.972 09:53:58 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:13.972 09:53:58 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:13.972 09:53:58 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:13.972 09:53:58 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:13.972 09:53:58 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:13.972 09:53:58 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:13.972 09:53:59 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:05:13.972 09:53:59 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:13.972 09:53:59 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:13.972 09:53:59 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:13.972 09:53:59 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:13.972 09:53:59 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:13.972 09:53:59 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:13.972 09:53:59 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:13.972 09:53:59 -- common/autotest_common.sh@1557 -- # continue 00:05:13.972 09:53:59 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:13.972 09:53:59 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:13.972 09:53:59 -- common/autotest_common.sh@10 -- # set +x 00:05:13.972 09:53:59 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:13.972 09:53:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:13.972 09:53:59 -- common/autotest_common.sh@10 -- # set +x 00:05:13.972 09:53:59 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:17.262 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:17.262 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:18.199 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:05:18.458 09:54:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:18.458 09:54:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.458 09:54:03 -- common/autotest_common.sh@10 -- # set +x 00:05:18.458 09:54:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:18.458 09:54:03 -- 
common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:18.458 09:54:03 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:18.458 09:54:03 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:18.458 09:54:03 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:18.458 09:54:03 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:18.458 09:54:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:18.458 09:54:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:18.458 09:54:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.458 09:54:03 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:18.458 09:54:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:18.458 09:54:03 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:18.458 09:54:03 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:05:18.458 09:54:03 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:18.458 09:54:03 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:05:18.458 09:54:03 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:18.458 09:54:03 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:18.458 09:54:03 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:18.458 09:54:03 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:05:18.458 09:54:03 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:05:18.458 09:54:03 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2379682 00:05:18.458 09:54:03 -- common/autotest_common.sh@1598 -- # waitforlisten 2379682 00:05:18.458 09:54:03 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.458 09:54:03 -- common/autotest_common.sh@831 -- # '[' -z 2379682 ']' 00:05:18.458 09:54:03 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.458 09:54:03 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.458 09:54:03 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.458 09:54:03 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.458 09:54:03 -- common/autotest_common.sh@10 -- # set +x 00:05:18.716 [2024-07-25 09:54:03.652740] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:18.716 [2024-07-25 09:54:03.652790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2379682 ] 00:05:18.716 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.716 [2024-07-25 09:54:03.717358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.716 [2024-07-25 09:54:03.795973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.652 09:54:04 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.652 09:54:04 -- common/autotest_common.sh@864 -- # return 0 00:05:19.652 09:54:04 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:19.652 09:54:04 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:19.652 09:54:04 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:22.938 nvme0n1 00:05:22.938 09:54:07 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:22.938 [2024-07-25 09:54:07.596798] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:22.938 request: 00:05:22.938 { 00:05:22.938 "nvme_ctrlr_name": "nvme0", 00:05:22.938 "password": "test", 00:05:22.938 "method": "bdev_nvme_opal_revert", 00:05:22.938 "req_id": 1 00:05:22.938 } 00:05:22.938 Got JSON-RPC error response 00:05:22.938 response: 00:05:22.938 { 00:05:22.938 "code": -32602, 00:05:22.938 "message": "Invalid parameters" 00:05:22.938 } 00:05:22.938 09:54:07 -- common/autotest_common.sh@1604 -- # true 00:05:22.938 09:54:07 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:22.938 09:54:07 -- common/autotest_common.sh@1608 -- # killprocess 2379682 00:05:22.938 09:54:07 -- common/autotest_common.sh@950 -- # '[' -z 2379682 ']' 00:05:22.938 09:54:07 -- common/autotest_common.sh@954 -- # kill -0 2379682 00:05:22.938 09:54:07 -- common/autotest_common.sh@955 -- # uname 00:05:22.938 09:54:07 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:22.938 09:54:07 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2379682 00:05:22.938 09:54:07 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:22.938 09:54:07 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:22.938 09:54:07 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2379682' 00:05:22.938 killing process with pid 2379682 00:05:22.938 09:54:07 -- common/autotest_common.sh@969 -- # kill 2379682 00:05:22.938 09:54:07 -- common/autotest_common.sh@974 -- # wait 2379682 00:05:24.839 09:54:09 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:24.839 09:54:09 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:24.839 09:54:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:24.839 09:54:09 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:24.839 09:54:09 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:24.839 09:54:09 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:24.839 09:54:09 -- common/autotest_common.sh@10 -- # set +x 00:05:24.839 09:54:09 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:24.839 09:54:09 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:24.839 09:54:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
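# Annotation (not from the captured run): every test in this log is wrapped by a
# run_test helper that prints the starred START/END banners and per-test timing
# seen throughout. A minimal sketch of that banner pattern, not SPDK's actual
# implementation:
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }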
00:05:24.839 09:54:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.839 09:54:09 -- common/autotest_common.sh@10 -- # set +x 00:05:24.839 ************************************ 00:05:24.839 START TEST env 00:05:24.839 ************************************ 00:05:24.839 09:54:09 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:05:24.839 * Looking for test storage... 00:05:24.839 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:05:24.839 09:54:09 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:24.839 09:54:09 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:24.839 09:54:09 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:24.839 09:54:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:24.839 ************************************ 00:05:24.839 START TEST env_memory 00:05:24.839 ************************************ 00:05:24.839 09:54:09 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:05:24.839 00:05:24.839 00:05:24.839 CUnit - A unit testing framework for C - Version 2.1-3 00:05:24.839 http://cunit.sourceforge.net/ 00:05:24.839 00:05:24.839 00:05:24.839 Suite: memory 00:05:25.099 Test: alloc and free memory map ...[2024-07-25 09:54:10.031886] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:25.099 passed 00:05:25.099 Test: mem map translation ...[2024-07-25 09:54:10.051548] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:25.099 [2024-07-25 09:54:10.051565] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:25.099 [2024-07-25 09:54:10.051603] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:25.099 [2024-07-25 09:54:10.051611] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:25.099 passed 00:05:25.099 Test: mem map registration ...[2024-07-25 09:54:10.089629] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:25.099 [2024-07-25 09:54:10.089649] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:25.099 passed 00:05:25.099 Test: mem map adjacent registrations ...passed 00:05:25.099 00:05:25.099 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.099 suites 1 1 n/a 0 0 00:05:25.099 tests 4 4 4 0 0 00:05:25.099 asserts 152 152 152 0 n/a 00:05:25.099 00:05:25.099 Elapsed time = 0.139 seconds 00:05:25.099 00:05:25.099 real 0m0.151s 00:05:25.099 user 0m0.139s 00:05:25.099 sys 0m0.011s 00:05:25.099 09:54:10 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.099 09:54:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:25.099 
************************************ 00:05:25.099 END TEST env_memory 00:05:25.099 ************************************ 00:05:25.099 09:54:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:25.099 09:54:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.099 09:54:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.099 09:54:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.099 ************************************ 00:05:25.099 START TEST env_vtophys 00:05:25.099 ************************************ 00:05:25.099 09:54:10 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:25.099 EAL: lib.eal log level changed from notice to debug 00:05:25.099 EAL: Detected lcore 0 as core 0 on socket 0 00:05:25.099 EAL: Detected lcore 1 as core 1 on socket 0 00:05:25.099 EAL: Detected lcore 2 as core 2 on socket 0 00:05:25.099 EAL: Detected lcore 3 as core 3 on socket 0 00:05:25.099 EAL: Detected lcore 4 as core 4 on socket 0 00:05:25.099 EAL: Detected lcore 5 as core 5 on socket 0 00:05:25.099 EAL: Detected lcore 6 as core 6 on socket 0 00:05:25.099 EAL: Detected lcore 7 as core 8 on socket 0 00:05:25.099 EAL: Detected lcore 8 as core 9 on socket 0 00:05:25.099 EAL: Detected lcore 9 as core 10 on socket 0 00:05:25.099 EAL: Detected lcore 10 as core 11 on socket 0 00:05:25.099 EAL: Detected lcore 11 as core 12 on socket 0 00:05:25.099 EAL: Detected lcore 12 as core 13 on socket 0 00:05:25.099 EAL: Detected lcore 13 as core 16 on socket 0 00:05:25.099 EAL: Detected lcore 14 as core 17 on socket 0 00:05:25.099 EAL: Detected lcore 15 as core 18 on socket 0 00:05:25.099 EAL: Detected lcore 16 as core 19 on socket 0 00:05:25.099 EAL: Detected lcore 17 as core 20 on socket 0 00:05:25.099 EAL: Detected lcore 18 as core 21 on socket 0 00:05:25.099 EAL: Detected lcore 19 as core 25 on socket 0 00:05:25.099 EAL: Detected lcore 20 as core 26 on socket 0 00:05:25.099 EAL: Detected lcore 21 as core 27 on socket 0 00:05:25.099 EAL: Detected lcore 22 as core 28 on socket 0 00:05:25.099 EAL: Detected lcore 23 as core 29 on socket 0 00:05:25.099 EAL: Detected lcore 24 as core 0 on socket 1 00:05:25.099 EAL: Detected lcore 25 as core 1 on socket 1 00:05:25.099 EAL: Detected lcore 26 as core 2 on socket 1 00:05:25.099 EAL: Detected lcore 27 as core 3 on socket 1 00:05:25.099 EAL: Detected lcore 28 as core 4 on socket 1 00:05:25.099 EAL: Detected lcore 29 as core 5 on socket 1 00:05:25.099 EAL: Detected lcore 30 as core 6 on socket 1 00:05:25.099 EAL: Detected lcore 31 as core 8 on socket 1 00:05:25.099 EAL: Detected lcore 32 as core 10 on socket 1 00:05:25.099 EAL: Detected lcore 33 as core 11 on socket 1 00:05:25.099 EAL: Detected lcore 34 as core 12 on socket 1 00:05:25.099 EAL: Detected lcore 35 as core 13 on socket 1 00:05:25.099 EAL: Detected lcore 36 as core 16 on socket 1 00:05:25.099 EAL: Detected lcore 37 as core 17 on socket 1 00:05:25.099 EAL: Detected lcore 38 as core 18 on socket 1 00:05:25.099 EAL: Detected lcore 39 as core 19 on socket 1 00:05:25.099 EAL: Detected lcore 40 as core 20 on socket 1 00:05:25.099 EAL: Detected lcore 41 as core 21 on socket 1 00:05:25.099 EAL: Detected lcore 42 as core 24 on socket 1 00:05:25.099 EAL: Detected lcore 43 as core 25 on socket 1 00:05:25.099 EAL: Detected lcore 44 as core 26 on socket 1 00:05:25.099 EAL: Detected lcore 45 as core 27 on socket 1 00:05:25.099 EAL: Detected 
lcore 46 as core 28 on socket 1 00:05:25.099 EAL: Detected lcore 47 as core 29 on socket 1 00:05:25.099 EAL: Detected lcore 48 as core 0 on socket 0 00:05:25.099 EAL: Detected lcore 49 as core 1 on socket 0 00:05:25.099 EAL: Detected lcore 50 as core 2 on socket 0 00:05:25.099 EAL: Detected lcore 51 as core 3 on socket 0 00:05:25.099 EAL: Detected lcore 52 as core 4 on socket 0 00:05:25.099 EAL: Detected lcore 53 as core 5 on socket 0 00:05:25.099 EAL: Detected lcore 54 as core 6 on socket 0 00:05:25.099 EAL: Detected lcore 55 as core 8 on socket 0 00:05:25.099 EAL: Detected lcore 56 as core 9 on socket 0 00:05:25.099 EAL: Detected lcore 57 as core 10 on socket 0 00:05:25.099 EAL: Detected lcore 58 as core 11 on socket 0 00:05:25.099 EAL: Detected lcore 59 as core 12 on socket 0 00:05:25.099 EAL: Detected lcore 60 as core 13 on socket 0 00:05:25.099 EAL: Detected lcore 61 as core 16 on socket 0 00:05:25.099 EAL: Detected lcore 62 as core 17 on socket 0 00:05:25.099 EAL: Detected lcore 63 as core 18 on socket 0 00:05:25.099 EAL: Detected lcore 64 as core 19 on socket 0 00:05:25.099 EAL: Detected lcore 65 as core 20 on socket 0 00:05:25.099 EAL: Detected lcore 66 as core 21 on socket 0 00:05:25.099 EAL: Detected lcore 67 as core 25 on socket 0 00:05:25.099 EAL: Detected lcore 68 as core 26 on socket 0 00:05:25.099 EAL: Detected lcore 69 as core 27 on socket 0 00:05:25.099 EAL: Detected lcore 70 as core 28 on socket 0 00:05:25.099 EAL: Detected lcore 71 as core 29 on socket 0 00:05:25.099 EAL: Detected lcore 72 as core 0 on socket 1 00:05:25.099 EAL: Detected lcore 73 as core 1 on socket 1 00:05:25.099 EAL: Detected lcore 74 as core 2 on socket 1 00:05:25.099 EAL: Detected lcore 75 as core 3 on socket 1 00:05:25.099 EAL: Detected lcore 76 as core 4 on socket 1 00:05:25.099 EAL: Detected lcore 77 as core 5 on socket 1 00:05:25.099 EAL: Detected lcore 78 as core 6 on socket 1 00:05:25.099 EAL: Detected lcore 79 as core 8 on socket 1 00:05:25.099 EAL: Detected lcore 80 as core 10 on socket 1 00:05:25.099 EAL: Detected lcore 81 as core 11 on socket 1 00:05:25.099 EAL: Detected lcore 82 as core 12 on socket 1 00:05:25.099 EAL: Detected lcore 83 as core 13 on socket 1 00:05:25.099 EAL: Detected lcore 84 as core 16 on socket 1 00:05:25.099 EAL: Detected lcore 85 as core 17 on socket 1 00:05:25.099 EAL: Detected lcore 86 as core 18 on socket 1 00:05:25.099 EAL: Detected lcore 87 as core 19 on socket 1 00:05:25.099 EAL: Detected lcore 88 as core 20 on socket 1 00:05:25.099 EAL: Detected lcore 89 as core 21 on socket 1 00:05:25.099 EAL: Detected lcore 90 as core 24 on socket 1 00:05:25.099 EAL: Detected lcore 91 as core 25 on socket 1 00:05:25.099 EAL: Detected lcore 92 as core 26 on socket 1 00:05:25.099 EAL: Detected lcore 93 as core 27 on socket 1 00:05:25.099 EAL: Detected lcore 94 as core 28 on socket 1 00:05:25.099 EAL: Detected lcore 95 as core 29 on socket 1 00:05:25.099 EAL: Maximum logical cores by configuration: 128 00:05:25.099 EAL: Detected CPU lcores: 96 00:05:25.099 EAL: Detected NUMA nodes: 2 00:05:25.099 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:25.099 EAL: Detected shared linkage of DPDK 00:05:25.099 EAL: No shared files mode enabled, IPC will be disabled 00:05:25.099 EAL: Bus pci wants IOVA as 'DC' 00:05:25.099 EAL: Buses did not request a specific IOVA mode. 00:05:25.099 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:25.099 EAL: Selected IOVA mode 'VA' 00:05:25.099 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.099 EAL: Probing VFIO support... 
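# Annotation (not from the captured run): the VFIO probe whose results follow can
# be checked by hand on a typical Linux host with standard sysfs paths and device
# nodes; a quick sketch:
    ls /dev/vfio/vfio                     # container device exists when the vfio module is loaded
    ls /sys/kernel/iommu_groups | wc -l   # non-zero group count when the IOMMU is enabled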
00:05:25.099 EAL: IOMMU type 1 (Type 1) is supported 00:05:25.099 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:25.099 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:25.099 EAL: VFIO support initialized 00:05:25.099 EAL: Ask a virtual area of 0x2e000 bytes 00:05:25.099 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:25.099 EAL: Setting up physically contiguous memory... 00:05:25.099 EAL: Setting maximum number of open files to 524288 00:05:25.099 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:25.099 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:25.099 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:25.099 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.099 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:25.099 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.099 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.100 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:25.100 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:25.100 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.100 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:25.100 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.100 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.100 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:25.100 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:25.100 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.100 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:25.100 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.100 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.100 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:25.100 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:25.100 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.100 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:25.100 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.100 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.100 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:25.100 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:25.100 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:25.100 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.100 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:25.100 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.100 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.100 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:25.100 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:25.100 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.100 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:25.100 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.100 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.100 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:25.100 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:25.100 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.100 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:25.100 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.100 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.100 EAL: Virtual area found at 
0x201800e00000 (size = 0x400000000) 00:05:25.100 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:25.100 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.100 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:25.100 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.100 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.100 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:25.100 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:25.100 EAL: Hugepages will be freed exactly as allocated. 00:05:25.100 EAL: No shared files mode enabled, IPC is disabled 00:05:25.100 EAL: No shared files mode enabled, IPC is disabled 00:05:25.100 EAL: TSC frequency is ~2100000 KHz 00:05:25.100 EAL: Main lcore 0 is ready (tid=7fc82920da00;cpuset=[0]) 00:05:25.100 EAL: Trying to obtain current memory policy. 00:05:25.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.100 EAL: Restoring previous memory policy: 0 00:05:25.100 EAL: request: mp_malloc_sync 00:05:25.100 EAL: No shared files mode enabled, IPC is disabled 00:05:25.100 EAL: Heap on socket 0 was expanded by 2MB 00:05:25.100 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:25.359 EAL: Mem event callback 'spdk:(nil)' registered 00:05:25.359 00:05:25.359 00:05:25.359 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.359 http://cunit.sourceforge.net/ 00:05:25.359 00:05:25.359 00:05:25.359 Suite: components_suite 00:05:25.359 Test: vtophys_malloc_test ...passed 00:05:25.359 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:25.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.359 EAL: Restoring previous memory policy: 4 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was expanded by 4MB 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was shrunk by 4MB 00:05:25.359 EAL: Trying to obtain current memory policy. 00:05:25.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.359 EAL: Restoring previous memory policy: 4 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was expanded by 6MB 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was shrunk by 6MB 00:05:25.359 EAL: Trying to obtain current memory policy. 00:05:25.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.359 EAL: Restoring previous memory policy: 4 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was expanded by 10MB 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was shrunk by 10MB 00:05:25.359 EAL: Trying to obtain current memory policy. 
00:05:25.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.359 EAL: Restoring previous memory policy: 4 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was expanded by 18MB 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was shrunk by 18MB 00:05:25.359 EAL: Trying to obtain current memory policy. 00:05:25.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.359 EAL: Restoring previous memory policy: 4 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was expanded by 34MB 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was shrunk by 34MB 00:05:25.359 EAL: Trying to obtain current memory policy. 00:05:25.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.359 EAL: Restoring previous memory policy: 4 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was expanded by 66MB 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was shrunk by 66MB 00:05:25.359 EAL: Trying to obtain current memory policy. 00:05:25.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.359 EAL: Restoring previous memory policy: 4 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.359 EAL: Trying to obtain current memory policy. 00:05:25.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.359 EAL: Restoring previous memory policy: 4 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was expanded by 258MB 00:05:25.359 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.359 EAL: request: mp_malloc_sync 00:05:25.359 EAL: No shared files mode enabled, IPC is disabled 00:05:25.359 EAL: Heap on socket 0 was shrunk by 258MB 00:05:25.359 EAL: Trying to obtain current memory policy. 
00:05:25.359 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.618 EAL: Restoring previous memory policy: 4 00:05:25.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.618 EAL: request: mp_malloc_sync 00:05:25.618 EAL: No shared files mode enabled, IPC is disabled 00:05:25.618 EAL: Heap on socket 0 was expanded by 514MB 00:05:25.618 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.619 EAL: request: mp_malloc_sync 00:05:25.619 EAL: No shared files mode enabled, IPC is disabled 00:05:25.619 EAL: Heap on socket 0 was shrunk by 514MB 00:05:25.619 EAL: Trying to obtain current memory policy. 00:05:25.619 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.877 EAL: Restoring previous memory policy: 4 00:05:25.877 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.877 EAL: request: mp_malloc_sync 00:05:25.877 EAL: No shared files mode enabled, IPC is disabled 00:05:25.877 EAL: Heap on socket 0 was expanded by 1026MB 00:05:26.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.136 EAL: request: mp_malloc_sync 00:05:26.136 EAL: No shared files mode enabled, IPC is disabled 00:05:26.136 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:26.136 passed 00:05:26.136 00:05:26.136 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.136 suites 1 1 n/a 0 0 00:05:26.136 tests 2 2 2 0 0 00:05:26.136 asserts 497 497 497 0 n/a 00:05:26.136 00:05:26.136 Elapsed time = 0.961 seconds 00:05:26.136 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.136 EAL: request: mp_malloc_sync 00:05:26.136 EAL: No shared files mode enabled, IPC is disabled 00:05:26.136 EAL: Heap on socket 0 was shrunk by 2MB 00:05:26.136 EAL: No shared files mode enabled, IPC is disabled 00:05:26.136 EAL: No shared files mode enabled, IPC is disabled 00:05:26.136 EAL: No shared files mode enabled, IPC is disabled 00:05:26.136 00:05:26.136 real 0m1.079s 00:05:26.136 user 0m0.640s 00:05:26.136 sys 0m0.416s 00:05:26.136 09:54:11 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.136 09:54:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:26.136 ************************************ 00:05:26.136 END TEST env_vtophys 00:05:26.136 ************************************ 00:05:26.395 09:54:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:26.395 09:54:11 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.395 09:54:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.395 09:54:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.395 ************************************ 00:05:26.395 START TEST env_pci 00:05:26.395 ************************************ 00:05:26.395 09:54:11 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:05:26.395 00:05:26.395 00:05:26.395 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.395 http://cunit.sourceforge.net/ 00:05:26.395 00:05:26.395 00:05:26.395 Suite: pci 00:05:26.395 Test: pci_hook ...[2024-07-25 09:54:11.368340] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2381391 has claimed it 00:05:26.395 EAL: Cannot find device (10000:00:01.0) 00:05:26.395 EAL: Failed to attach device on primary process 00:05:26.395 passed 00:05:26.395 00:05:26.395 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.395 suites 1 
1 n/a 0 0 00:05:26.395 tests 1 1 1 0 0 00:05:26.395 asserts 25 25 25 0 n/a 00:05:26.395 00:05:26.395 Elapsed time = 0.026 seconds 00:05:26.395 00:05:26.395 real 0m0.045s 00:05:26.395 user 0m0.019s 00:05:26.395 sys 0m0.026s 00:05:26.395 09:54:11 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.395 09:54:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:26.396 ************************************ 00:05:26.396 END TEST env_pci 00:05:26.396 ************************************ 00:05:26.396 09:54:11 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:26.396 09:54:11 env -- env/env.sh@15 -- # uname 00:05:26.396 09:54:11 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:26.396 09:54:11 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:26.396 09:54:11 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:26.396 09:54:11 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:26.396 09:54:11 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.396 09:54:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.396 ************************************ 00:05:26.396 START TEST env_dpdk_post_init 00:05:26.396 ************************************ 00:05:26.396 09:54:11 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:26.396 EAL: Detected CPU lcores: 96 00:05:26.396 EAL: Detected NUMA nodes: 2 00:05:26.396 EAL: Detected shared linkage of DPDK 00:05:26.396 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:26.396 EAL: Selected IOVA mode 'VA' 00:05:26.396 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.396 EAL: VFIO support initialized 00:05:26.396 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:26.655 EAL: Using IOMMU type 1 (Type 1) 00:05:26.655 EAL: Ignore mapping IO port bar(1) 00:05:26.655 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:05:26.655 EAL: Ignore mapping IO port bar(1) 00:05:26.655 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:05:26.655 EAL: Ignore mapping IO port bar(1) 00:05:26.655 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:05:26.655 EAL: Ignore mapping IO port bar(1) 00:05:26.655 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:05:26.655 EAL: Ignore mapping IO port bar(1) 00:05:26.655 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:05:26.655 EAL: Ignore mapping IO port bar(1) 00:05:26.655 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:05:26.655 EAL: Ignore mapping IO port bar(1) 00:05:26.655 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:05:26.655 EAL: Ignore mapping IO port bar(1) 00:05:26.655 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:05:27.592 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:05:27.592 EAL: Ignore mapping IO port bar(1) 00:05:27.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:05:27.592 EAL: Ignore mapping IO port bar(1) 00:05:27.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:05:27.592 EAL: Ignore mapping 
IO port bar(1) 00:05:27.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:05:27.592 EAL: Ignore mapping IO port bar(1) 00:05:27.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:05:27.592 EAL: Ignore mapping IO port bar(1) 00:05:27.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:05:27.592 EAL: Ignore mapping IO port bar(1) 00:05:27.592 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:05:27.592 EAL: Ignore mapping IO port bar(1) 00:05:27.593 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:05:27.593 EAL: Ignore mapping IO port bar(1) 00:05:27.593 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:05:30.875 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:05:30.875 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:05:31.443 Starting DPDK initialization... 00:05:31.443 Starting SPDK post initialization... 00:05:31.443 SPDK NVMe probe 00:05:31.443 Attaching to 0000:5e:00.0 00:05:31.443 Attached to 0000:5e:00.0 00:05:31.443 Cleaning up... 00:05:31.443 00:05:31.443 real 0m4.919s 00:05:31.443 user 0m3.830s 00:05:31.443 sys 0m0.158s 00:05:31.443 09:54:16 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.443 09:54:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.443 ************************************ 00:05:31.443 END TEST env_dpdk_post_init 00:05:31.443 ************************************ 00:05:31.443 09:54:16 env -- env/env.sh@26 -- # uname 00:05:31.443 09:54:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:31.443 09:54:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.443 09:54:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.443 09:54:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.443 09:54:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.443 ************************************ 00:05:31.443 START TEST env_mem_callbacks 00:05:31.443 ************************************ 00:05:31.443 09:54:16 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.443 EAL: Detected CPU lcores: 96 00:05:31.443 EAL: Detected NUMA nodes: 2 00:05:31.443 EAL: Detected shared linkage of DPDK 00:05:31.443 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.443 EAL: Selected IOVA mode 'VA' 00:05:31.443 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.443 EAL: VFIO support initialized 00:05:31.443 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.443 00:05:31.443 00:05:31.443 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.443 http://cunit.sourceforge.net/ 00:05:31.443 00:05:31.443 00:05:31.443 Suite: memory 00:05:31.443 Test: test ... 
00:05:31.443 register 0x200000200000 2097152 00:05:31.443 malloc 3145728 00:05:31.444 register 0x200000400000 4194304 00:05:31.444 buf 0x200000500000 len 3145728 PASSED 00:05:31.444 malloc 64 00:05:31.444 buf 0x2000004fff40 len 64 PASSED 00:05:31.444 malloc 4194304 00:05:31.444 register 0x200000800000 6291456 00:05:31.444 buf 0x200000a00000 len 4194304 PASSED 00:05:31.444 free 0x200000500000 3145728 00:05:31.444 free 0x2000004fff40 64 00:05:31.444 unregister 0x200000400000 4194304 PASSED 00:05:31.444 free 0x200000a00000 4194304 00:05:31.444 unregister 0x200000800000 6291456 PASSED 00:05:31.444 malloc 8388608 00:05:31.444 register 0x200000400000 10485760 00:05:31.444 buf 0x200000600000 len 8388608 PASSED 00:05:31.444 free 0x200000600000 8388608 00:05:31.444 unregister 0x200000400000 10485760 PASSED 00:05:31.444 passed 00:05:31.444 00:05:31.444 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.444 suites 1 1 n/a 0 0 00:05:31.444 tests 1 1 1 0 0 00:05:31.444 asserts 15 15 15 0 n/a 00:05:31.444 00:05:31.444 Elapsed time = 0.008 seconds 00:05:31.444 00:05:31.444 real 0m0.057s 00:05:31.444 user 0m0.019s 00:05:31.444 sys 0m0.038s 00:05:31.444 09:54:16 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.444 09:54:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:31.444 ************************************ 00:05:31.444 END TEST env_mem_callbacks 00:05:31.444 ************************************ 00:05:31.444 00:05:31.444 real 0m6.686s 00:05:31.444 user 0m4.827s 00:05:31.444 sys 0m0.934s 00:05:31.444 09:54:16 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.444 09:54:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.444 ************************************ 00:05:31.444 END TEST env 00:05:31.444 ************************************ 00:05:31.444 09:54:16 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:31.444 09:54:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.444 09:54:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.444 09:54:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.704 ************************************ 00:05:31.704 START TEST rpc 00:05:31.704 ************************************ 00:05:31.704 09:54:16 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:05:31.704 * Looking for test storage... 00:05:31.704 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:31.704 09:54:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2382440 00:05:31.704 09:54:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.704 09:54:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:31.704 09:54:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2382440 00:05:31.704 09:54:16 rpc -- common/autotest_common.sh@831 -- # '[' -z 2382440 ']' 00:05:31.704 09:54:16 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.704 09:54:16 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.704 09:54:16 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
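# Annotation (not from the captured run): the wait above blocks until spdk_tgt
# answers on its UNIX domain socket. A minimal way to poll for that by hand,
# assuming the default /var/tmp/spdk.sock path and using rpc_get_methods as a
# cheap liveness probe:
    while ! ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done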
00:05:31.704 09:54:16 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.704 09:54:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.704 [2024-07-25 09:54:16.757507] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:31.704 [2024-07-25 09:54:16.757549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2382440 ] 00:05:31.704 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.704 [2024-07-25 09:54:16.825561] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.963 [2024-07-25 09:54:16.898398] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:31.963 [2024-07-25 09:54:16.898437] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2382440' to capture a snapshot of events at runtime. 00:05:31.963 [2024-07-25 09:54:16.898444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:31.963 [2024-07-25 09:54:16.898450] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:31.963 [2024-07-25 09:54:16.898454] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2382440 for offline analysis/debug. 00:05:31.963 [2024-07-25 09:54:16.898474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.531 09:54:17 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.531 09:54:17 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.531 09:54:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:32.531 09:54:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:32.531 09:54:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:32.531 09:54:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:32.531 09:54:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.531 09:54:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.531 09:54:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.531 ************************************ 00:05:32.531 START TEST rpc_integrity 00:05:32.531 ************************************ 00:05:32.531 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:32.531 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.531 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.531 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.531 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.531 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.531 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:32.531 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # 
'[' 0 == 0 ']' 00:05:32.531 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.531 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.531 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.531 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.531 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:32.531 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.531 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.531 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.531 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.531 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.531 { 00:05:32.531 "name": "Malloc0", 00:05:32.531 "aliases": [ 00:05:32.531 "5612392d-d86b-4527-9018-9b15008168bb" 00:05:32.531 ], 00:05:32.531 "product_name": "Malloc disk", 00:05:32.531 "block_size": 512, 00:05:32.531 "num_blocks": 16384, 00:05:32.531 "uuid": "5612392d-d86b-4527-9018-9b15008168bb", 00:05:32.531 "assigned_rate_limits": { 00:05:32.531 "rw_ios_per_sec": 0, 00:05:32.531 "rw_mbytes_per_sec": 0, 00:05:32.531 "r_mbytes_per_sec": 0, 00:05:32.531 "w_mbytes_per_sec": 0 00:05:32.531 }, 00:05:32.531 "claimed": false, 00:05:32.531 "zoned": false, 00:05:32.531 "supported_io_types": { 00:05:32.531 "read": true, 00:05:32.531 "write": true, 00:05:32.531 "unmap": true, 00:05:32.531 "flush": true, 00:05:32.531 "reset": true, 00:05:32.531 "nvme_admin": false, 00:05:32.531 "nvme_io": false, 00:05:32.531 "nvme_io_md": false, 00:05:32.531 "write_zeroes": true, 00:05:32.531 "zcopy": true, 00:05:32.531 "get_zone_info": false, 00:05:32.531 "zone_management": false, 00:05:32.531 "zone_append": false, 00:05:32.531 "compare": false, 00:05:32.531 "compare_and_write": false, 00:05:32.531 "abort": true, 00:05:32.531 "seek_hole": false, 00:05:32.531 "seek_data": false, 00:05:32.531 "copy": true, 00:05:32.531 "nvme_iov_md": false 00:05:32.531 }, 00:05:32.531 "memory_domains": [ 00:05:32.531 { 00:05:32.531 "dma_device_id": "system", 00:05:32.531 "dma_device_type": 1 00:05:32.531 }, 00:05:32.531 { 00:05:32.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.531 "dma_device_type": 2 00:05:32.531 } 00:05:32.531 ], 00:05:32.531 "driver_specific": {} 00:05:32.531 } 00:05:32.531 ]' 00:05:32.531 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:32.790 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.790 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:32.790 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.790 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.790 [2024-07-25 09:54:17.732389] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:32.790 [2024-07-25 09:54:17.732419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.790 [2024-07-25 09:54:17.732429] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x188c0d0 00:05:32.790 [2024-07-25 09:54:17.732436] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.790 [2024-07-25 09:54:17.733497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:32.790 [2024-07-25 
09:54:17.733518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.790 Passthru0 00:05:32.790 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.790 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:32.790 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.790 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.790 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.790 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.790 { 00:05:32.790 "name": "Malloc0", 00:05:32.790 "aliases": [ 00:05:32.790 "5612392d-d86b-4527-9018-9b15008168bb" 00:05:32.790 ], 00:05:32.790 "product_name": "Malloc disk", 00:05:32.790 "block_size": 512, 00:05:32.790 "num_blocks": 16384, 00:05:32.790 "uuid": "5612392d-d86b-4527-9018-9b15008168bb", 00:05:32.790 "assigned_rate_limits": { 00:05:32.790 "rw_ios_per_sec": 0, 00:05:32.790 "rw_mbytes_per_sec": 0, 00:05:32.790 "r_mbytes_per_sec": 0, 00:05:32.790 "w_mbytes_per_sec": 0 00:05:32.790 }, 00:05:32.790 "claimed": true, 00:05:32.790 "claim_type": "exclusive_write", 00:05:32.790 "zoned": false, 00:05:32.790 "supported_io_types": { 00:05:32.790 "read": true, 00:05:32.790 "write": true, 00:05:32.790 "unmap": true, 00:05:32.790 "flush": true, 00:05:32.790 "reset": true, 00:05:32.790 "nvme_admin": false, 00:05:32.790 "nvme_io": false, 00:05:32.790 "nvme_io_md": false, 00:05:32.790 "write_zeroes": true, 00:05:32.790 "zcopy": true, 00:05:32.790 "get_zone_info": false, 00:05:32.790 "zone_management": false, 00:05:32.790 "zone_append": false, 00:05:32.790 "compare": false, 00:05:32.790 "compare_and_write": false, 00:05:32.790 "abort": true, 00:05:32.790 "seek_hole": false, 00:05:32.790 "seek_data": false, 00:05:32.790 "copy": true, 00:05:32.790 "nvme_iov_md": false 00:05:32.790 }, 00:05:32.790 "memory_domains": [ 00:05:32.790 { 00:05:32.790 "dma_device_id": "system", 00:05:32.790 "dma_device_type": 1 00:05:32.790 }, 00:05:32.790 { 00:05:32.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.790 "dma_device_type": 2 00:05:32.790 } 00:05:32.790 ], 00:05:32.790 "driver_specific": {} 00:05:32.790 }, 00:05:32.790 { 00:05:32.790 "name": "Passthru0", 00:05:32.790 "aliases": [ 00:05:32.790 "cdbff2ac-cf1b-5f58-91ab-ac21f4f8f05d" 00:05:32.790 ], 00:05:32.790 "product_name": "passthru", 00:05:32.790 "block_size": 512, 00:05:32.790 "num_blocks": 16384, 00:05:32.790 "uuid": "cdbff2ac-cf1b-5f58-91ab-ac21f4f8f05d", 00:05:32.790 "assigned_rate_limits": { 00:05:32.790 "rw_ios_per_sec": 0, 00:05:32.790 "rw_mbytes_per_sec": 0, 00:05:32.790 "r_mbytes_per_sec": 0, 00:05:32.790 "w_mbytes_per_sec": 0 00:05:32.790 }, 00:05:32.790 "claimed": false, 00:05:32.790 "zoned": false, 00:05:32.790 "supported_io_types": { 00:05:32.790 "read": true, 00:05:32.790 "write": true, 00:05:32.790 "unmap": true, 00:05:32.790 "flush": true, 00:05:32.790 "reset": true, 00:05:32.790 "nvme_admin": false, 00:05:32.790 "nvme_io": false, 00:05:32.790 "nvme_io_md": false, 00:05:32.790 "write_zeroes": true, 00:05:32.790 "zcopy": true, 00:05:32.790 "get_zone_info": false, 00:05:32.790 "zone_management": false, 00:05:32.790 "zone_append": false, 00:05:32.790 "compare": false, 00:05:32.790 "compare_and_write": false, 00:05:32.790 "abort": true, 00:05:32.790 "seek_hole": false, 00:05:32.790 "seek_data": false, 00:05:32.791 "copy": true, 00:05:32.791 "nvme_iov_md": false 00:05:32.791 }, 00:05:32.791 
"memory_domains": [ 00:05:32.791 { 00:05:32.791 "dma_device_id": "system", 00:05:32.791 "dma_device_type": 1 00:05:32.791 }, 00:05:32.791 { 00:05:32.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.791 "dma_device_type": 2 00:05:32.791 } 00:05:32.791 ], 00:05:32.791 "driver_specific": { 00:05:32.791 "passthru": { 00:05:32.791 "name": "Passthru0", 00:05:32.791 "base_bdev_name": "Malloc0" 00:05:32.791 } 00:05:32.791 } 00:05:32.791 } 00:05:32.791 ]' 00:05:32.791 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:32.791 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.791 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.791 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.791 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.791 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.791 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:32.791 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.791 00:05:32.791 real 0m0.279s 00:05:32.791 user 0m0.178s 00:05:32.791 sys 0m0.035s 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.791 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.791 ************************************ 00:05:32.791 END TEST rpc_integrity 00:05:32.791 ************************************ 00:05:32.791 09:54:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:32.791 09:54:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.791 09:54:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.791 09:54:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.791 ************************************ 00:05:32.791 START TEST rpc_plugins 00:05:32.791 ************************************ 00:05:32.791 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:32.791 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:32.791 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.791 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.050 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.050 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:33.050 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:33.050 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.050 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 
00:05:33.050 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.050 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:33.050 { 00:05:33.050 "name": "Malloc1", 00:05:33.050 "aliases": [ 00:05:33.050 "5e7fae29-ab1f-422e-b301-d3691b89da8d" 00:05:33.050 ], 00:05:33.050 "product_name": "Malloc disk", 00:05:33.050 "block_size": 4096, 00:05:33.050 "num_blocks": 256, 00:05:33.050 "uuid": "5e7fae29-ab1f-422e-b301-d3691b89da8d", 00:05:33.050 "assigned_rate_limits": { 00:05:33.050 "rw_ios_per_sec": 0, 00:05:33.050 "rw_mbytes_per_sec": 0, 00:05:33.050 "r_mbytes_per_sec": 0, 00:05:33.050 "w_mbytes_per_sec": 0 00:05:33.050 }, 00:05:33.050 "claimed": false, 00:05:33.050 "zoned": false, 00:05:33.050 "supported_io_types": { 00:05:33.050 "read": true, 00:05:33.050 "write": true, 00:05:33.050 "unmap": true, 00:05:33.050 "flush": true, 00:05:33.050 "reset": true, 00:05:33.050 "nvme_admin": false, 00:05:33.050 "nvme_io": false, 00:05:33.050 "nvme_io_md": false, 00:05:33.050 "write_zeroes": true, 00:05:33.050 "zcopy": true, 00:05:33.050 "get_zone_info": false, 00:05:33.050 "zone_management": false, 00:05:33.050 "zone_append": false, 00:05:33.050 "compare": false, 00:05:33.050 "compare_and_write": false, 00:05:33.050 "abort": true, 00:05:33.050 "seek_hole": false, 00:05:33.050 "seek_data": false, 00:05:33.050 "copy": true, 00:05:33.050 "nvme_iov_md": false 00:05:33.050 }, 00:05:33.050 "memory_domains": [ 00:05:33.050 { 00:05:33.050 "dma_device_id": "system", 00:05:33.050 "dma_device_type": 1 00:05:33.050 }, 00:05:33.050 { 00:05:33.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.050 "dma_device_type": 2 00:05:33.050 } 00:05:33.050 ], 00:05:33.050 "driver_specific": {} 00:05:33.050 } 00:05:33.050 ]' 00:05:33.050 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:33.050 09:54:18 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:33.050 09:54:18 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:33.050 09:54:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.050 09:54:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.050 09:54:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.050 09:54:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:33.050 09:54:18 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.050 09:54:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.050 09:54:18 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.050 09:54:18 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:33.050 09:54:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:33.050 09:54:18 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:33.050 00:05:33.050 real 0m0.136s 00:05:33.050 user 0m0.087s 00:05:33.050 sys 0m0.017s 00:05:33.050 09:54:18 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.050 09:54:18 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:33.050 ************************************ 00:05:33.050 END TEST rpc_plugins 00:05:33.050 ************************************ 00:05:33.050 09:54:18 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:33.050 09:54:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.050 09:54:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.050 09:54:18 rpc -- common/autotest_common.sh@10 -- # 
set +x 00:05:33.050 ************************************ 00:05:33.050 START TEST rpc_trace_cmd_test 00:05:33.050 ************************************ 00:05:33.050 09:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:33.050 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:33.050 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:33.050 09:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.050 09:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:33.050 09:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.050 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:33.050 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2382440", 00:05:33.050 "tpoint_group_mask": "0x8", 00:05:33.050 "iscsi_conn": { 00:05:33.050 "mask": "0x2", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "scsi": { 00:05:33.050 "mask": "0x4", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "bdev": { 00:05:33.050 "mask": "0x8", 00:05:33.050 "tpoint_mask": "0xffffffffffffffff" 00:05:33.050 }, 00:05:33.050 "nvmf_rdma": { 00:05:33.050 "mask": "0x10", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "nvmf_tcp": { 00:05:33.050 "mask": "0x20", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "ftl": { 00:05:33.050 "mask": "0x40", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "blobfs": { 00:05:33.050 "mask": "0x80", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "dsa": { 00:05:33.050 "mask": "0x200", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "thread": { 00:05:33.050 "mask": "0x400", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "nvme_pcie": { 00:05:33.050 "mask": "0x800", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "iaa": { 00:05:33.050 "mask": "0x1000", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "nvme_tcp": { 00:05:33.050 "mask": "0x2000", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.050 }, 00:05:33.050 "bdev_nvme": { 00:05:33.050 "mask": "0x4000", 00:05:33.050 "tpoint_mask": "0x0" 00:05:33.051 }, 00:05:33.051 "sock": { 00:05:33.051 "mask": "0x8000", 00:05:33.051 "tpoint_mask": "0x0" 00:05:33.051 } 00:05:33.051 }' 00:05:33.051 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:33.051 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:33.051 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:33.309 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:33.309 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:33.309 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:33.309 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.309 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.309 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.309 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:33.309 00:05:33.309 real 0m0.211s 00:05:33.309 user 0m0.174s 00:05:33.309 sys 0m0.031s 00:05:33.309 09:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.309 09:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # 
set +x 00:05:33.309 ************************************ 00:05:33.309 END TEST rpc_trace_cmd_test 00:05:33.309 ************************************ 00:05:33.309 09:54:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.309 09:54:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.309 09:54:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.309 09:54:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.309 09:54:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.309 09:54:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.309 ************************************ 00:05:33.309 START TEST rpc_daemon_integrity 00:05:33.310 ************************************ 00:05:33.310 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:33.310 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.310 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.310 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.310 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.310 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.310 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.568 { 00:05:33.568 "name": "Malloc2", 00:05:33.568 "aliases": [ 00:05:33.568 "81590a19-560f-4562-aba2-67af825444c5" 00:05:33.568 ], 00:05:33.568 "product_name": "Malloc disk", 00:05:33.568 "block_size": 512, 00:05:33.568 "num_blocks": 16384, 00:05:33.568 "uuid": "81590a19-560f-4562-aba2-67af825444c5", 00:05:33.568 "assigned_rate_limits": { 00:05:33.568 "rw_ios_per_sec": 0, 00:05:33.568 "rw_mbytes_per_sec": 0, 00:05:33.568 "r_mbytes_per_sec": 0, 00:05:33.568 "w_mbytes_per_sec": 0 00:05:33.568 }, 00:05:33.568 "claimed": false, 00:05:33.568 "zoned": false, 00:05:33.568 "supported_io_types": { 00:05:33.568 "read": true, 00:05:33.568 "write": true, 00:05:33.568 "unmap": true, 00:05:33.568 "flush": true, 00:05:33.568 "reset": true, 00:05:33.568 "nvme_admin": false, 00:05:33.568 "nvme_io": false, 00:05:33.568 "nvme_io_md": false, 00:05:33.568 "write_zeroes": true, 00:05:33.568 "zcopy": true, 00:05:33.568 "get_zone_info": false, 00:05:33.568 "zone_management": false, 00:05:33.568 "zone_append": false, 00:05:33.568 "compare": false, 00:05:33.568 "compare_and_write": false, 00:05:33.568 "abort": true, 00:05:33.568 "seek_hole": false, 
00:05:33.568 "seek_data": false, 00:05:33.568 "copy": true, 00:05:33.568 "nvme_iov_md": false 00:05:33.568 }, 00:05:33.568 "memory_domains": [ 00:05:33.568 { 00:05:33.568 "dma_device_id": "system", 00:05:33.568 "dma_device_type": 1 00:05:33.568 }, 00:05:33.568 { 00:05:33.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.568 "dma_device_type": 2 00:05:33.568 } 00:05:33.568 ], 00:05:33.568 "driver_specific": {} 00:05:33.568 } 00:05:33.568 ]' 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.568 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.569 [2024-07-25 09:54:18.562652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:33.569 [2024-07-25 09:54:18.562679] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.569 [2024-07-25 09:54:18.562689] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x188c6b0 00:05:33.569 [2024-07-25 09:54:18.562699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.569 [2024-07-25 09:54:18.563641] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.569 [2024-07-25 09:54:18.563662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.569 Passthru0 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.569 { 00:05:33.569 "name": "Malloc2", 00:05:33.569 "aliases": [ 00:05:33.569 "81590a19-560f-4562-aba2-67af825444c5" 00:05:33.569 ], 00:05:33.569 "product_name": "Malloc disk", 00:05:33.569 "block_size": 512, 00:05:33.569 "num_blocks": 16384, 00:05:33.569 "uuid": "81590a19-560f-4562-aba2-67af825444c5", 00:05:33.569 "assigned_rate_limits": { 00:05:33.569 "rw_ios_per_sec": 0, 00:05:33.569 "rw_mbytes_per_sec": 0, 00:05:33.569 "r_mbytes_per_sec": 0, 00:05:33.569 "w_mbytes_per_sec": 0 00:05:33.569 }, 00:05:33.569 "claimed": true, 00:05:33.569 "claim_type": "exclusive_write", 00:05:33.569 "zoned": false, 00:05:33.569 "supported_io_types": { 00:05:33.569 "read": true, 00:05:33.569 "write": true, 00:05:33.569 "unmap": true, 00:05:33.569 "flush": true, 00:05:33.569 "reset": true, 00:05:33.569 "nvme_admin": false, 00:05:33.569 "nvme_io": false, 00:05:33.569 "nvme_io_md": false, 00:05:33.569 "write_zeroes": true, 00:05:33.569 "zcopy": true, 00:05:33.569 "get_zone_info": false, 00:05:33.569 "zone_management": false, 00:05:33.569 "zone_append": false, 00:05:33.569 "compare": false, 00:05:33.569 "compare_and_write": false, 00:05:33.569 "abort": true, 00:05:33.569 "seek_hole": false, 00:05:33.569 "seek_data": false, 00:05:33.569 "copy": true, 00:05:33.569 "nvme_iov_md": false 00:05:33.569 }, 00:05:33.569 
"memory_domains": [ 00:05:33.569 { 00:05:33.569 "dma_device_id": "system", 00:05:33.569 "dma_device_type": 1 00:05:33.569 }, 00:05:33.569 { 00:05:33.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.569 "dma_device_type": 2 00:05:33.569 } 00:05:33.569 ], 00:05:33.569 "driver_specific": {} 00:05:33.569 }, 00:05:33.569 { 00:05:33.569 "name": "Passthru0", 00:05:33.569 "aliases": [ 00:05:33.569 "be19f3e8-8891-56f4-8b2d-02c8450be910" 00:05:33.569 ], 00:05:33.569 "product_name": "passthru", 00:05:33.569 "block_size": 512, 00:05:33.569 "num_blocks": 16384, 00:05:33.569 "uuid": "be19f3e8-8891-56f4-8b2d-02c8450be910", 00:05:33.569 "assigned_rate_limits": { 00:05:33.569 "rw_ios_per_sec": 0, 00:05:33.569 "rw_mbytes_per_sec": 0, 00:05:33.569 "r_mbytes_per_sec": 0, 00:05:33.569 "w_mbytes_per_sec": 0 00:05:33.569 }, 00:05:33.569 "claimed": false, 00:05:33.569 "zoned": false, 00:05:33.569 "supported_io_types": { 00:05:33.569 "read": true, 00:05:33.569 "write": true, 00:05:33.569 "unmap": true, 00:05:33.569 "flush": true, 00:05:33.569 "reset": true, 00:05:33.569 "nvme_admin": false, 00:05:33.569 "nvme_io": false, 00:05:33.569 "nvme_io_md": false, 00:05:33.569 "write_zeroes": true, 00:05:33.569 "zcopy": true, 00:05:33.569 "get_zone_info": false, 00:05:33.569 "zone_management": false, 00:05:33.569 "zone_append": false, 00:05:33.569 "compare": false, 00:05:33.569 "compare_and_write": false, 00:05:33.569 "abort": true, 00:05:33.569 "seek_hole": false, 00:05:33.569 "seek_data": false, 00:05:33.569 "copy": true, 00:05:33.569 "nvme_iov_md": false 00:05:33.569 }, 00:05:33.569 "memory_domains": [ 00:05:33.569 { 00:05:33.569 "dma_device_id": "system", 00:05:33.569 "dma_device_type": 1 00:05:33.569 }, 00:05:33.569 { 00:05:33.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.569 "dma_device_type": 2 00:05:33.569 } 00:05:33.569 ], 00:05:33.569 "driver_specific": { 00:05:33.569 "passthru": { 00:05:33.569 "name": "Passthru0", 00:05:33.569 "base_bdev_name": "Malloc2" 00:05:33.569 } 00:05:33.569 } 00:05:33.569 } 00:05:33.569 ]' 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:33.569 
09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.569 00:05:33.569 real 0m0.285s 00:05:33.569 user 0m0.182s 00:05:33.569 sys 0m0.035s 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.569 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.569 ************************************ 00:05:33.569 END TEST rpc_daemon_integrity 00:05:33.569 ************************************ 00:05:33.828 09:54:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:33.828 09:54:18 rpc -- rpc/rpc.sh@84 -- # killprocess 2382440 00:05:33.828 09:54:18 rpc -- common/autotest_common.sh@950 -- # '[' -z 2382440 ']' 00:05:33.828 09:54:18 rpc -- common/autotest_common.sh@954 -- # kill -0 2382440 00:05:33.828 09:54:18 rpc -- common/autotest_common.sh@955 -- # uname 00:05:33.828 09:54:18 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.828 09:54:18 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2382440 00:05:33.828 09:54:18 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.828 09:54:18 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.828 09:54:18 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2382440' 00:05:33.828 killing process with pid 2382440 00:05:33.828 09:54:18 rpc -- common/autotest_common.sh@969 -- # kill 2382440 00:05:33.828 09:54:18 rpc -- common/autotest_common.sh@974 -- # wait 2382440 00:05:34.087 00:05:34.087 real 0m2.469s 00:05:34.087 user 0m3.167s 00:05:34.087 sys 0m0.698s 00:05:34.087 09:54:19 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.087 09:54:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.087 ************************************ 00:05:34.087 END TEST rpc 00:05:34.087 ************************************ 00:05:34.087 09:54:19 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.087 09:54:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.087 09:54:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.087 09:54:19 -- common/autotest_common.sh@10 -- # set +x 00:05:34.087 ************************************ 00:05:34.087 START TEST skip_rpc 00:05:34.087 ************************************ 00:05:34.087 09:54:19 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.087 * Looking for test storage... 
00:05:34.387 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:05:34.387 09:54:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:34.387 09:54:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:34.387 09:54:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:34.387 09:54:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.387 09:54:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.387 09:54:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.387 ************************************ 00:05:34.387 START TEST skip_rpc 00:05:34.387 ************************************ 00:05:34.387 09:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:34.387 09:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2383079 00:05:34.387 09:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:34.387 09:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.388 09:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:34.388 [2024-07-25 09:54:19.324713] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:34.388 [2024-07-25 09:54:19.324748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2383079 ] 00:05:34.388 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.388 [2024-07-25 09:54:19.388927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.388 [2024-07-25 09:54:19.459659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 
-- # trap - SIGINT SIGTERM EXIT 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2383079 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2383079 ']' 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2383079 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2383079 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2383079' 00:05:39.682 killing process with pid 2383079 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2383079 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2383079 00:05:39.682 00:05:39.682 real 0m5.362s 00:05:39.682 user 0m5.131s 00:05:39.682 sys 0m0.256s 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.682 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.682 ************************************ 00:05:39.682 END TEST skip_rpc 00:05:39.682 ************************************ 00:05:39.682 09:54:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:39.682 09:54:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.682 09:54:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.682 09:54:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.682 ************************************ 00:05:39.682 START TEST skip_rpc_with_json 00:05:39.682 ************************************ 00:05:39.682 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:39.682 09:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:39.683 09:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2384025 00:05:39.683 09:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.683 09:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.683 09:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2384025 00:05:39.683 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2384025 ']' 00:05:39.683 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.683 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.683 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:39.683 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.683 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.683 [2024-07-25 09:54:24.762636] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:39.683 [2024-07-25 09:54:24.762676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2384025 ] 00:05:39.683 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.683 [2024-07-25 09:54:24.827434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.941 [2024-07-25 09:54:24.896149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.509 [2024-07-25 09:54:25.565811] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:40.509 request: 00:05:40.509 { 00:05:40.509 "trtype": "tcp", 00:05:40.509 "method": "nvmf_get_transports", 00:05:40.509 "req_id": 1 00:05:40.509 } 00:05:40.509 Got JSON-RPC error response 00:05:40.509 response: 00:05:40.509 { 00:05:40.509 "code": -19, 00:05:40.509 "message": "No such device" 00:05:40.509 } 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.509 [2024-07-25 09:54:25.577918] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.509 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.767 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.767 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:40.767 { 00:05:40.767 "subsystems": [ 00:05:40.767 { 00:05:40.767 "subsystem": "keyring", 00:05:40.767 "config": [] 00:05:40.767 }, 00:05:40.767 { 00:05:40.767 "subsystem": "iobuf", 00:05:40.767 "config": [ 00:05:40.767 { 00:05:40.767 "method": "iobuf_set_options", 00:05:40.767 "params": { 00:05:40.767 "small_pool_count": 8192, 00:05:40.767 "large_pool_count": 1024, 00:05:40.767 "small_bufsize": 8192, 00:05:40.767 "large_bufsize": 135168 00:05:40.767 } 00:05:40.767 } 00:05:40.767 ] 00:05:40.767 }, 00:05:40.767 { 00:05:40.767 "subsystem": 
"sock", 00:05:40.767 "config": [ 00:05:40.767 { 00:05:40.767 "method": "sock_set_default_impl", 00:05:40.767 "params": { 00:05:40.767 "impl_name": "posix" 00:05:40.767 } 00:05:40.767 }, 00:05:40.767 { 00:05:40.767 "method": "sock_impl_set_options", 00:05:40.767 "params": { 00:05:40.767 "impl_name": "ssl", 00:05:40.767 "recv_buf_size": 4096, 00:05:40.767 "send_buf_size": 4096, 00:05:40.767 "enable_recv_pipe": true, 00:05:40.767 "enable_quickack": false, 00:05:40.767 "enable_placement_id": 0, 00:05:40.767 "enable_zerocopy_send_server": true, 00:05:40.767 "enable_zerocopy_send_client": false, 00:05:40.767 "zerocopy_threshold": 0, 00:05:40.767 "tls_version": 0, 00:05:40.767 "enable_ktls": false 00:05:40.767 } 00:05:40.767 }, 00:05:40.767 { 00:05:40.767 "method": "sock_impl_set_options", 00:05:40.767 "params": { 00:05:40.767 "impl_name": "posix", 00:05:40.767 "recv_buf_size": 2097152, 00:05:40.767 "send_buf_size": 2097152, 00:05:40.767 "enable_recv_pipe": true, 00:05:40.767 "enable_quickack": false, 00:05:40.767 "enable_placement_id": 0, 00:05:40.767 "enable_zerocopy_send_server": true, 00:05:40.767 "enable_zerocopy_send_client": false, 00:05:40.767 "zerocopy_threshold": 0, 00:05:40.767 "tls_version": 0, 00:05:40.767 "enable_ktls": false 00:05:40.767 } 00:05:40.767 } 00:05:40.767 ] 00:05:40.767 }, 00:05:40.767 { 00:05:40.768 "subsystem": "vmd", 00:05:40.768 "config": [] 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "subsystem": "accel", 00:05:40.768 "config": [ 00:05:40.768 { 00:05:40.768 "method": "accel_set_options", 00:05:40.768 "params": { 00:05:40.768 "small_cache_size": 128, 00:05:40.768 "large_cache_size": 16, 00:05:40.768 "task_count": 2048, 00:05:40.768 "sequence_count": 2048, 00:05:40.768 "buf_count": 2048 00:05:40.768 } 00:05:40.768 } 00:05:40.768 ] 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "subsystem": "bdev", 00:05:40.768 "config": [ 00:05:40.768 { 00:05:40.768 "method": "bdev_set_options", 00:05:40.768 "params": { 00:05:40.768 "bdev_io_pool_size": 65535, 00:05:40.768 "bdev_io_cache_size": 256, 00:05:40.768 "bdev_auto_examine": true, 00:05:40.768 "iobuf_small_cache_size": 128, 00:05:40.768 "iobuf_large_cache_size": 16 00:05:40.768 } 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "method": "bdev_raid_set_options", 00:05:40.768 "params": { 00:05:40.768 "process_window_size_kb": 1024, 00:05:40.768 "process_max_bandwidth_mb_sec": 0 00:05:40.768 } 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "method": "bdev_iscsi_set_options", 00:05:40.768 "params": { 00:05:40.768 "timeout_sec": 30 00:05:40.768 } 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "method": "bdev_nvme_set_options", 00:05:40.768 "params": { 00:05:40.768 "action_on_timeout": "none", 00:05:40.768 "timeout_us": 0, 00:05:40.768 "timeout_admin_us": 0, 00:05:40.768 "keep_alive_timeout_ms": 10000, 00:05:40.768 "arbitration_burst": 0, 00:05:40.768 "low_priority_weight": 0, 00:05:40.768 "medium_priority_weight": 0, 00:05:40.768 "high_priority_weight": 0, 00:05:40.768 "nvme_adminq_poll_period_us": 10000, 00:05:40.768 "nvme_ioq_poll_period_us": 0, 00:05:40.768 "io_queue_requests": 0, 00:05:40.768 "delay_cmd_submit": true, 00:05:40.768 "transport_retry_count": 4, 00:05:40.768 "bdev_retry_count": 3, 00:05:40.768 "transport_ack_timeout": 0, 00:05:40.768 "ctrlr_loss_timeout_sec": 0, 00:05:40.768 "reconnect_delay_sec": 0, 00:05:40.768 "fast_io_fail_timeout_sec": 0, 00:05:40.768 "disable_auto_failback": false, 00:05:40.768 "generate_uuids": false, 00:05:40.768 "transport_tos": 0, 00:05:40.768 "nvme_error_stat": false, 00:05:40.768 "rdma_srq_size": 
0, 00:05:40.768 "io_path_stat": false, 00:05:40.768 "allow_accel_sequence": false, 00:05:40.768 "rdma_max_cq_size": 0, 00:05:40.768 "rdma_cm_event_timeout_ms": 0, 00:05:40.768 "dhchap_digests": [ 00:05:40.768 "sha256", 00:05:40.768 "sha384", 00:05:40.768 "sha512" 00:05:40.768 ], 00:05:40.768 "dhchap_dhgroups": [ 00:05:40.768 "null", 00:05:40.768 "ffdhe2048", 00:05:40.768 "ffdhe3072", 00:05:40.768 "ffdhe4096", 00:05:40.768 "ffdhe6144", 00:05:40.768 "ffdhe8192" 00:05:40.768 ] 00:05:40.768 } 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "method": "bdev_nvme_set_hotplug", 00:05:40.768 "params": { 00:05:40.768 "period_us": 100000, 00:05:40.768 "enable": false 00:05:40.768 } 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "method": "bdev_wait_for_examine" 00:05:40.768 } 00:05:40.768 ] 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "subsystem": "scsi", 00:05:40.768 "config": null 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "subsystem": "scheduler", 00:05:40.768 "config": [ 00:05:40.768 { 00:05:40.768 "method": "framework_set_scheduler", 00:05:40.768 "params": { 00:05:40.768 "name": "static" 00:05:40.768 } 00:05:40.768 } 00:05:40.768 ] 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "subsystem": "vhost_scsi", 00:05:40.768 "config": [] 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "subsystem": "vhost_blk", 00:05:40.768 "config": [] 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "subsystem": "ublk", 00:05:40.768 "config": [] 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "subsystem": "nbd", 00:05:40.768 "config": [] 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "subsystem": "nvmf", 00:05:40.768 "config": [ 00:05:40.768 { 00:05:40.768 "method": "nvmf_set_config", 00:05:40.768 "params": { 00:05:40.768 "discovery_filter": "match_any", 00:05:40.768 "admin_cmd_passthru": { 00:05:40.768 "identify_ctrlr": false 00:05:40.768 } 00:05:40.768 } 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "method": "nvmf_set_max_subsystems", 00:05:40.768 "params": { 00:05:40.768 "max_subsystems": 1024 00:05:40.768 } 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "method": "nvmf_set_crdt", 00:05:40.768 "params": { 00:05:40.768 "crdt1": 0, 00:05:40.768 "crdt2": 0, 00:05:40.768 "crdt3": 0 00:05:40.768 } 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "method": "nvmf_create_transport", 00:05:40.768 "params": { 00:05:40.768 "trtype": "TCP", 00:05:40.768 "max_queue_depth": 128, 00:05:40.768 "max_io_qpairs_per_ctrlr": 127, 00:05:40.768 "in_capsule_data_size": 4096, 00:05:40.768 "max_io_size": 131072, 00:05:40.768 "io_unit_size": 131072, 00:05:40.768 "max_aq_depth": 128, 00:05:40.768 "num_shared_buffers": 511, 00:05:40.768 "buf_cache_size": 4294967295, 00:05:40.768 "dif_insert_or_strip": false, 00:05:40.768 "zcopy": false, 00:05:40.768 "c2h_success": true, 00:05:40.768 "sock_priority": 0, 00:05:40.768 "abort_timeout_sec": 1, 00:05:40.768 "ack_timeout": 0, 00:05:40.768 "data_wr_pool_size": 0 00:05:40.768 } 00:05:40.768 } 00:05:40.768 ] 00:05:40.768 }, 00:05:40.768 { 00:05:40.768 "subsystem": "iscsi", 00:05:40.768 "config": [ 00:05:40.768 { 00:05:40.768 "method": "iscsi_set_options", 00:05:40.768 "params": { 00:05:40.768 "node_base": "iqn.2016-06.io.spdk", 00:05:40.768 "max_sessions": 128, 00:05:40.768 "max_connections_per_session": 2, 00:05:40.768 "max_queue_depth": 64, 00:05:40.768 "default_time2wait": 2, 00:05:40.768 "default_time2retain": 20, 00:05:40.768 "first_burst_length": 8192, 00:05:40.768 "immediate_data": true, 00:05:40.768 "allow_duplicated_isid": false, 00:05:40.768 "error_recovery_level": 0, 00:05:40.768 "nop_timeout": 60, 00:05:40.768 
"nop_in_interval": 30, 00:05:40.768 "disable_chap": false, 00:05:40.768 "require_chap": false, 00:05:40.768 "mutual_chap": false, 00:05:40.768 "chap_group": 0, 00:05:40.768 "max_large_datain_per_connection": 64, 00:05:40.768 "max_r2t_per_connection": 4, 00:05:40.768 "pdu_pool_size": 36864, 00:05:40.768 "immediate_data_pool_size": 16384, 00:05:40.768 "data_out_pool_size": 2048 00:05:40.768 } 00:05:40.768 } 00:05:40.768 ] 00:05:40.768 } 00:05:40.768 ] 00:05:40.768 } 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2384025 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2384025 ']' 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2384025 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2384025 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2384025' 00:05:40.768 killing process with pid 2384025 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2384025 00:05:40.768 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2384025 00:05:41.027 09:54:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2384261 00:05:41.027 09:54:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:41.027 09:54:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2384261 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2384261 ']' 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2384261 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2384261 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2384261' 00:05:46.295 killing process with pid 2384261 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2384261 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2384261 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP 
Transport Init' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:46.295 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/log.txt 00:05:46.295 00:05:46.295 real 0m6.738s 00:05:46.296 user 0m6.567s 00:05:46.296 sys 0m0.596s 00:05:46.296 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.296 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.296 ************************************ 00:05:46.296 END TEST skip_rpc_with_json 00:05:46.296 ************************************ 00:05:46.554 09:54:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:46.554 09:54:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.554 09:54:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.554 09:54:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.554 ************************************ 00:05:46.554 START TEST skip_rpc_with_delay 00:05:46.554 ************************************ 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.554 [2024-07-25 09:54:31.570975] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:46.554 [2024-07-25 09:54:31.571038] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.554 00:05:46.554 real 0m0.065s 00:05:46.554 user 0m0.046s 00:05:46.554 sys 0m0.019s 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.554 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:46.554 ************************************ 00:05:46.554 END TEST skip_rpc_with_delay 00:05:46.554 ************************************ 00:05:46.554 09:54:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:46.554 09:54:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:46.554 09:54:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:46.554 09:54:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.554 09:54:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.554 09:54:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.554 ************************************ 00:05:46.554 START TEST exit_on_failed_rpc_init 00:05:46.554 ************************************ 00:05:46.554 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:46.554 09:54:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2385238 00:05:46.554 09:54:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2385238 00:05:46.554 09:54:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.554 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2385238 ']' 00:05:46.554 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.554 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.554 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.554 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.554 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.554 [2024-07-25 09:54:31.704595] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:46.554 [2024-07-25 09:54:31.704636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385238 ] 00:05:46.813 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.813 [2024-07-25 09:54:31.769454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.813 [2024-07-25 09:54:31.847996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:47.380 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.639 [2024-07-25 09:54:32.550352] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:47.639 [2024-07-25 09:54:32.550400] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385446 ] 00:05:47.639 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.639 [2024-07-25 09:54:32.616812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.639 [2024-07-25 09:54:32.689787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.639 [2024-07-25 09:54:32.689852] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. 
Specify another. 00:05:47.639 [2024-07-25 09:54:32.689860] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:47.639 [2024-07-25 09:54:32.689867] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2385238 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2385238 ']' 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2385238 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2385238 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2385238' 00:05:47.639 killing process with pid 2385238 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2385238 00:05:47.639 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2385238 00:05:48.207 00:05:48.207 real 0m1.444s 00:05:48.207 user 0m1.653s 00:05:48.207 sys 0m0.403s 00:05:48.207 09:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.207 09:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.207 ************************************ 00:05:48.207 END TEST exit_on_failed_rpc_init 00:05:48.207 ************************************ 00:05:48.207 09:54:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/config.json 00:05:48.207 00:05:48.207 real 0m13.973s 00:05:48.207 user 0m13.547s 00:05:48.207 sys 0m1.514s 00:05:48.207 09:54:33 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.207 09:54:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.207 ************************************ 00:05:48.207 END TEST skip_rpc 00:05:48.207 ************************************ 00:05:48.207 09:54:33 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:48.207 09:54:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.207 09:54:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.207 
09:54:33 -- common/autotest_common.sh@10 -- # set +x 00:05:48.207 ************************************ 00:05:48.207 START TEST rpc_client 00:05:48.208 ************************************ 00:05:48.208 09:54:33 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:48.208 * Looking for test storage... 00:05:48.208 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:05:48.208 09:54:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:48.208 OK 00:05:48.208 09:54:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:48.208 00:05:48.208 real 0m0.114s 00:05:48.208 user 0m0.050s 00:05:48.208 sys 0m0.071s 00:05:48.208 09:54:33 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.208 09:54:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:48.208 ************************************ 00:05:48.208 END TEST rpc_client 00:05:48.208 ************************************ 00:05:48.208 09:54:33 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:48.208 09:54:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.208 09:54:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.208 09:54:33 -- common/autotest_common.sh@10 -- # set +x 00:05:48.467 ************************************ 00:05:48.467 START TEST json_config 00:05:48.467 ************************************ 00:05:48.467 09:54:33 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:05:48.467 09:54:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:05:48.467 09:54:33 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.467 09:54:33 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.467 09:54:33 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.467 09:54:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.467 09:54:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.467 09:54:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.467 09:54:33 json_config -- paths/export.sh@5 -- # export PATH 00:05:48.467 09:54:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@47 -- # : 0 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:48.467 09:54:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:05:48.467 09:54:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:48.467 09:54:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:48.468 INFO: JSON configuration test init 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:48.468 09:54:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.468 09:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:48.468 09:54:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.468 09:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.468 09:54:33 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:48.468 09:54:33 json_config -- json_config/common.sh@9 -- # local app=target 00:05:48.468 09:54:33 json_config -- json_config/common.sh@10 -- # shift 00:05:48.468 09:54:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:48.468 09:54:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:48.468 09:54:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:48.468 09:54:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.468 09:54:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.468 09:54:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2385597 00:05:48.468 09:54:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:48.468 Waiting for target to run... 
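Because the target here is started with --wait-for-rpc, it idles until configuration arrives over its private socket; every tgt_rpc call in this trace is just rpc.py aimed at that socket. Condensed (the trace suggests gen_nvme.sh output is piped straight into load_config):

# start the target paused, listening for RPCs on a dedicated socket
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

# feed it a generated configuration, as tgt_rpc load_config does below
./scripts/gen_nvme.sh --json-with-subsystems | \
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types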
00:05:48.468 09:54:33 json_config -- json_config/common.sh@25 -- # waitforlisten 2385597 /var/tmp/spdk_tgt.sock 00:05:48.468 09:54:33 json_config -- common/autotest_common.sh@831 -- # '[' -z 2385597 ']' 00:05:48.468 09:54:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:48.468 09:54:33 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.468 09:54:33 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.468 09:54:33 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.468 09:54:33 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.468 09:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.468 [2024-07-25 09:54:33.530742] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:48.468 [2024-07-25 09:54:33.530793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2385597 ] 00:05:48.468 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.035 [2024-07-25 09:54:33.980847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.035 [2024-07-25 09:54:34.070310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.293 09:54:34 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.293 09:54:34 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:49.293 09:54:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:49.293 00:05:49.293 09:54:34 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:49.293 09:54:34 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:49.293 09:54:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.293 09:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.293 09:54:34 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:49.293 09:54:34 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:49.293 09:54:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.293 09:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.293 09:54:34 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:49.293 09:54:34 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:49.293 09:54:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:52.577 09:54:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.577 09:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.577 09:54:37 json_config -- 
json_config/json_config.sh@45 -- # local ret=0 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:52.577 09:54:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@51 -- # sort 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:52.577 09:54:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:52.577 09:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:52.577 09:54:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:52.577 09:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@237 -- # [[ rdma == \r\d\m\a ]] 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@238 -- # TEST_TRANSPORT=rdma 00:05:52.577 09:54:37 json_config -- json_config/json_config.sh@238 -- # nvmftestinit 00:05:52.577 09:54:37 json_config -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:05:52.577 09:54:37 json_config -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:52.577 09:54:37 json_config -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:52.577 09:54:37 json_config -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:52.577 09:54:37 json_config -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:52.577 09:54:37 json_config -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:52.577 09:54:37 json_config -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:05:52.577 
09:54:37 json_config -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:52.578 09:54:37 json_config -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:05:52.578 09:54:37 json_config -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:52.578 09:54:37 json_config -- nvmf/common.sh@285 -- # xtrace_disable 00:05:52.578 09:54:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@291 -- # pci_devs=() 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@295 -- # net_devs=() 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@296 -- # e810=() 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@296 -- # local -ga e810 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@297 -- # x722=() 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@297 -- # local -ga x722 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@298 -- # mlx=() 00:05:59.140 09:54:43 json_config -- nvmf/common.sh@298 -- # local -ga mlx 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:05:59.141 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:05:59.141 
09:54:43 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:05:59.141 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:05:59.141 Found net devices under 0000:da:00.0: mlx_0_0 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:05:59.141 Found net devices under 0000:da:00.1: mlx_0_1 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@414 -- # is_hw=yes 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@420 -- # rdma_device_init 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@58 -- # uname 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@58 -- # 
'[' Linux '!=' Linux ']' 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@62 -- # modprobe ib_cm 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@63 -- # modprobe ib_core 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@64 -- # modprobe ib_umad 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@66 -- # modprobe iw_cm 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@502 -- # allocate_nic_ips 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@73 -- # get_rdma_if_list 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:05:59.141 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:59.141 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:05:59.141 altname enp218s0f0np0 00:05:59.141 altname ens818f0np0 00:05:59.141 inet 192.168.100.8/24 scope global mlx_0_0 00:05:59.141 valid_lft forever preferred_lft forever 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@74 -- # 
get_ip_address mlx_0_1 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:05:59.141 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:05:59.141 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:05:59.141 altname enp218s0f1np1 00:05:59.141 altname ens818f1np1 00:05:59.141 inet 192.168.100.9/24 scope global mlx_0_1 00:05:59.141 valid_lft forever preferred_lft forever 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@422 -- # return 0 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@86 -- # get_rdma_if_list 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@104 -- # echo mlx_0_0 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:05:59.141 09:54:43 json_config -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@104 -- # echo mlx_0_1 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@105 -- # continue 2 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:05:59.142 09:54:43 json_config -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@113 -- # awk '{print $4}' 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@113 -- # cut -d/ -f1 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:05:59.142 192.168.100.9' 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:05:59.142 192.168.100.9' 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@457 -- # head -n 1 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:05:59.142 192.168.100.9' 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@458 -- # tail -n +2 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@458 -- # head -n 1 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:05:59.142 09:54:43 json_config -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:05:59.142 09:54:43 json_config -- json_config/json_config.sh@241 -- # [[ -z 192.168.100.8 ]] 00:05:59.142 09:54:43 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:59.142 09:54:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:59.142 MallocForNvmf0 00:05:59.142 09:54:43 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:59.142 09:54:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:59.142 MallocForNvmf1 00:05:59.142 09:54:43 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:05:59.142 09:54:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:05:59.142 [2024-07-25 09:54:43.700971] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:05:59.142 [2024-07-25 09:54:43.738669] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1955720/0x1ab7d00) succeed. 00:05:59.142 [2024-07-25 09:54:43.751389] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1954710/0x1997bc0) succeed. 
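The malloc bdevs and the rdma transport were created just above; the calls that follow add the subsystem, its namespaces, and the RDMA listener. Put together, the whole NVMe-oF wiring is this RPC sequence (arguments verbatim from the trace):

# helper mirroring tgt_rpc: every call goes to the same target socket
rpc() { ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }

rpc bdev_malloc_create 8 512 --name MallocForNvmf0
rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
rpc nvmf_create_transport -t rdma -u 8192 -c 0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420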
00:05:59.142 09:54:43 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:59.142 09:54:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:59.142 09:54:43 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:59.142 09:54:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:59.142 09:54:44 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:59.142 09:54:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:59.142 09:54:44 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:59.142 09:54:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:05:59.401 [2024-07-25 09:54:44.428607] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:05:59.401 09:54:44 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:59.401 09:54:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.401 09:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.401 09:54:44 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:59.401 09:54:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.401 09:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.401 09:54:44 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:59.401 09:54:44 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:59.401 09:54:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:59.660 MallocBdevForConfigChangeCheck 00:05:59.660 09:54:44 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:59.660 09:54:44 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:59.660 09:54:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.660 09:54:44 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:59.660 09:54:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.919 09:54:45 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:59.919 INFO: shutting down applications... 
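Shutdown is two-phase: clear_config.py empties each subsystem over the same RPC socket (the Calling clear_*_subsystem lines below), then the harness sends SIGINT and polls the process with kill -0, half a second at a time, for up to 30 iterations. In outline:

./test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
kill -SIGINT "$spdk_pid"
for ((i = 0; i < 30; i++)); do
    kill -0 "$spdk_pid" 2>/dev/null || break   # target has exited
    sleep 0.5
done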
00:05:59.919 09:54:45 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:59.919 09:54:45 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:59.919 09:54:45 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:59.919 09:54:45 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:02.451 Calling clear_iscsi_subsystem 00:06:02.451 Calling clear_nvmf_subsystem 00:06:02.451 Calling clear_nbd_subsystem 00:06:02.451 Calling clear_ublk_subsystem 00:06:02.451 Calling clear_vhost_blk_subsystem 00:06:02.451 Calling clear_vhost_scsi_subsystem 00:06:02.451 Calling clear_bdev_subsystem 00:06:02.451 09:54:47 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:02.451 09:54:47 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:02.451 09:54:47 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:02.451 09:54:47 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.451 09:54:47 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:02.451 09:54:47 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:02.451 09:54:47 json_config -- json_config/json_config.sh@349 -- # break 00:06:02.451 09:54:47 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:02.451 09:54:47 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:02.451 09:54:47 json_config -- json_config/common.sh@31 -- # local app=target 00:06:02.451 09:54:47 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:02.451 09:54:47 json_config -- json_config/common.sh@35 -- # [[ -n 2385597 ]] 00:06:02.451 09:54:47 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2385597 00:06:02.451 09:54:47 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:02.451 09:54:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:02.451 09:54:47 json_config -- json_config/common.sh@41 -- # kill -0 2385597 00:06:02.451 09:54:47 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:03.020 09:54:47 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:03.020 09:54:47 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:03.020 09:54:47 json_config -- json_config/common.sh@41 -- # kill -0 2385597 00:06:03.020 09:54:47 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:03.020 09:54:47 json_config -- json_config/common.sh@43 -- # break 00:06:03.020 09:54:47 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:03.020 09:54:47 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:03.020 SPDK target shutdown done 00:06:03.020 09:54:47 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:03.020 INFO: relaunching applications... 
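The relaunch swaps --wait-for-rpc for --json, so the new target replays the configuration saved a moment earlier instead of waiting to be configured. Roughly (the redirect into spdk_tgt_config.json is implied by the file's later use, not shown verbatim in the trace):

# persist the live configuration of the old target...
./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
# ...then start a fresh target directly from that file
./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json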
00:06:03.020 09:54:47 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.020 09:54:47 json_config -- json_config/common.sh@9 -- # local app=target 00:06:03.020 09:54:47 json_config -- json_config/common.sh@10 -- # shift 00:06:03.020 09:54:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:03.020 09:54:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:03.020 09:54:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:03.020 09:54:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.020 09:54:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:03.020 09:54:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2390327 00:06:03.020 09:54:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:03.020 Waiting for target to run... 00:06:03.020 09:54:47 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:03.020 09:54:47 json_config -- json_config/common.sh@25 -- # waitforlisten 2390327 /var/tmp/spdk_tgt.sock 00:06:03.020 09:54:47 json_config -- common/autotest_common.sh@831 -- # '[' -z 2390327 ']' 00:06:03.020 09:54:47 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:03.020 09:54:47 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.020 09:54:47 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:03.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:03.020 09:54:47 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.020 09:54:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:03.020 [2024-07-25 09:54:47.997270] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:03.020 [2024-07-25 09:54:47.997323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390327 ] 00:06:03.020 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.310 [2024-07-25 09:54:48.272372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.310 [2024-07-25 09:54:48.339467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.595 [2024-07-25 09:54:51.384121] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ece1f0/0x1e54600) succeed. 00:06:06.595 [2024-07-25 09:54:51.395437] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ecd1e0/0x1d34480) succeed. 
00:06:06.595 [2024-07-25 09:54:51.453752] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:07.163 09:54:52 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.163 09:54:52 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:07.163 09:54:52 json_config -- json_config/common.sh@26 -- # echo '' 00:06:07.163 00:06:07.163 09:54:52 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:07.163 09:54:52 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:07.163 INFO: Checking if target configuration is the same... 00:06:07.163 09:54:52 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:07.163 09:54:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.163 09:54:52 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.163 + '[' 2 -ne 2 ']' 00:06:07.163 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:07.163 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:07.163 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:07.163 +++ basename /dev/fd/62 00:06:07.163 ++ mktemp /tmp/62.XXX 00:06:07.163 + tmp_file_1=/tmp/62.3IL 00:06:07.163 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.163 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.163 + tmp_file_2=/tmp/spdk_tgt_config.json.q8e 00:06:07.163 + ret=0 00:06:07.163 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.421 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.421 + diff -u /tmp/62.3IL /tmp/spdk_tgt_config.json.q8e 00:06:07.421 + echo 'INFO: JSON config files are the same' 00:06:07.421 INFO: JSON config files are the same 00:06:07.421 + rm /tmp/62.3IL /tmp/spdk_tgt_config.json.q8e 00:06:07.421 + exit 0 00:06:07.421 09:54:52 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:07.421 09:54:52 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:07.421 INFO: changing configuration and checking if this can be detected... 
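The identity check above is plain text diffing: json_diff.sh canonicalizes both inputs with config_filter.py -method sort into mktemp files and compares them with diff -u, exiting 0 on a match; the block below then deletes MallocBdevForConfigChangeCheck and repeats the comparison, expecting exit 1. The comparison step amounts to (live_config.json is a hypothetical capture of save_config output; /tmp names stand in for the mktemp files):

# canonicalize both configurations, then compare the normalized text
./test/json_config/config_filter.py -method sort < live_config.json     > /tmp/a.json
./test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/b.json
diff -u /tmp/a.json /tmp/b.json && echo 'INFO: JSON config files are the same'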
00:06:07.421 09:54:52 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.421 09:54:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:07.679 09:54:52 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.679 09:54:52 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:07.679 09:54:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.679 + '[' 2 -ne 2 ']' 00:06:07.680 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:07.680 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:07.680 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:07.680 +++ basename /dev/fd/62 00:06:07.680 ++ mktemp /tmp/62.XXX 00:06:07.680 + tmp_file_1=/tmp/62.XwC 00:06:07.680 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.680 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:07.680 + tmp_file_2=/tmp/spdk_tgt_config.json.9nZ 00:06:07.680 + ret=0 00:06:07.680 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.938 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:07.938 + diff -u /tmp/62.XwC /tmp/spdk_tgt_config.json.9nZ 00:06:07.938 + ret=1 00:06:07.938 + echo '=== Start of file: /tmp/62.XwC ===' 00:06:07.938 + cat /tmp/62.XwC 00:06:07.938 + echo '=== End of file: /tmp/62.XwC ===' 00:06:07.938 + echo '' 00:06:07.938 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9nZ ===' 00:06:07.938 + cat /tmp/spdk_tgt_config.json.9nZ 00:06:07.938 + echo '=== End of file: /tmp/spdk_tgt_config.json.9nZ ===' 00:06:07.938 + echo '' 00:06:07.938 + rm /tmp/62.XwC /tmp/spdk_tgt_config.json.9nZ 00:06:07.938 + exit 1 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:07.938 INFO: configuration change detected. 
00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:07.938 09:54:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.938 09:54:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@321 -- # [[ -n 2390327 ]] 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:07.938 09:54:53 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:07.938 09:54:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:07.938 09:54:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:07.938 09:54:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.938 09:54:53 json_config -- json_config/json_config.sh@327 -- # killprocess 2390327 00:06:07.938 09:54:53 json_config -- common/autotest_common.sh@950 -- # '[' -z 2390327 ']' 00:06:07.938 09:54:53 json_config -- common/autotest_common.sh@954 -- # kill -0 2390327 00:06:07.938 09:54:53 json_config -- common/autotest_common.sh@955 -- # uname 00:06:08.197 09:54:53 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.197 09:54:53 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2390327 00:06:08.197 09:54:53 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.197 09:54:53 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.197 09:54:53 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2390327' 00:06:08.197 killing process with pid 2390327 00:06:08.197 09:54:53 json_config -- common/autotest_common.sh@969 -- # kill 2390327 00:06:08.197 09:54:53 json_config -- common/autotest_common.sh@974 -- # wait 2390327 00:06:10.099 09:54:55 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:10.099 09:54:55 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:10.099 09:54:55 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:10.099 09:54:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.099 09:54:55 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:10.099 09:54:55 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:10.099 INFO: Success 00:06:10.099 09:54:55 json_config -- 
json_config/json_config.sh@1 -- # nvmftestfini 00:06:10.099 09:54:55 json_config -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:10.099 09:54:55 json_config -- nvmf/common.sh@117 -- # sync 00:06:10.099 09:54:55 json_config -- nvmf/common.sh@119 -- # '[' '' == tcp ']' 00:06:10.099 09:54:55 json_config -- nvmf/common.sh@119 -- # '[' '' == rdma ']' 00:06:10.099 09:54:55 json_config -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:06:10.099 09:54:55 json_config -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:10.099 09:54:55 json_config -- nvmf/common.sh@495 -- # [[ '' == \t\c\p ]] 00:06:10.099 00:06:10.099 real 0m21.832s 00:06:10.099 user 0m24.118s 00:06:10.099 sys 0m6.106s 00:06:10.099 09:54:55 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.099 09:54:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:10.099 ************************************ 00:06:10.099 END TEST json_config 00:06:10.099 ************************************ 00:06:10.099 09:54:55 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.099 09:54:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.099 09:54:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.099 09:54:55 -- common/autotest_common.sh@10 -- # set +x 00:06:10.358 ************************************ 00:06:10.358 START TEST json_config_extra_key 00:06:10.358 ************************************ 00:06:10.358 09:54:55 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:10.358 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.358 09:54:55 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:10.358 09:54:55 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.358 09:54:55 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.358 09:54:55 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.358 09:54:55 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.358 09:54:55 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:10.359 09:54:55 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.359 09:54:55 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.359 09:54:55 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.359 09:54:55 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.359 09:54:55 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.359 09:54:55 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.359 09:54:55 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:10.359 09:54:55 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:10.359 09:54:55 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/common.sh 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:10.359 09:54:55 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:10.359 INFO: launching applications... 00:06:10.359 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2391601 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:10.359 Waiting for target to run... 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2391601 /var/tmp/spdk_tgt.sock 00:06:10.359 09:54:55 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2391601 ']' 00:06:10.359 09:54:55 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:10.359 09:54:55 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:10.359 09:54:55 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.359 09:54:55 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:10.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
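The waitforlisten call traced above (common.sh@25) blocks until the freshly launched spdk_tgt answers on /var/tmp/spdk_tgt.sock. A hedged sketch of what such a wait loop does; the real autotest_common.sh helper is more thorough (for instance, it also probes the socket through rpc.py):

    waitforlisten() {
      local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do            # max_retries=100, as echoed in the trace
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        [ -S "$sock" ] && return 0               # socket exists, target is listening
        sleep 0.1                                # assumed retry interval
      done
      return 1
    }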
00:06:10.359 09:54:55 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.359 09:54:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.359 [2024-07-25 09:54:55.427050] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:10.359 [2024-07-25 09:54:55.427101] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391601 ] 00:06:10.359 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.618 [2024-07-25 09:54:55.707836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.877 [2024-07-25 09:54:55.782303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.136 09:54:56 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.136 09:54:56 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:11.136 09:54:56 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:11.136 00:06:11.136 09:54:56 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:11.136 INFO: shutting down applications... 00:06:11.136 09:54:56 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:11.136 09:54:56 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:11.136 09:54:56 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:11.136 09:54:56 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2391601 ]] 00:06:11.136 09:54:56 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2391601 00:06:11.136 09:54:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:11.136 09:54:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.136 09:54:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2391601 00:06:11.136 09:54:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:11.704 09:54:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:11.704 09:54:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:11.704 09:54:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2391601 00:06:11.704 09:54:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:11.704 09:54:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:11.704 09:54:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:11.704 09:54:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:11.704 SPDK target shutdown done 00:06:11.704 09:54:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:11.704 Success 00:06:11.704 00:06:11.704 real 0m1.443s 00:06:11.704 user 0m1.210s 00:06:11.704 sys 0m0.366s 00:06:11.704 09:54:56 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.704 09:54:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:11.704 ************************************ 00:06:11.704 END TEST json_config_extra_key 00:06:11.704 ************************************ 00:06:11.704 09:54:56 -- spdk/autotest.sh@174 -- # run_test alias_rpc 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.704 09:54:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.704 09:54:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.704 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:06:11.704 ************************************ 00:06:11.704 START TEST alias_rpc 00:06:11.704 ************************************ 00:06:11.704 09:54:56 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:11.964 * Looking for test storage... 00:06:11.964 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:11.964 09:54:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:11.964 09:54:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2391893 00:06:11.964 09:54:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.964 09:54:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2391893 00:06:11.964 09:54:56 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2391893 ']' 00:06:11.964 09:54:56 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.964 09:54:56 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.964 09:54:56 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.964 09:54:56 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.964 09:54:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.964 [2024-07-25 09:54:56.930836] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
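The alias_rpc test just started exercises the JSON-RPC alias layer: its single real step (alias_rpc.sh@17, traced below) pipes a config into rpc.py load_config, which must resolve old method names through their registered aliases. Illustrative shape only; the input file name here is made up:

    # -i is exactly the flag the test passes; the config may carry deprecated
    # method names that the alias table maps to their renamed equivalents.
    scripts/rpc.py load_config -i < config_with_old_method_names.json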
00:06:11.964 [2024-07-25 09:54:56.930894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391893 ] 00:06:11.964 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.964 [2024-07-25 09:54:56.998523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.964 [2024-07-25 09:54:57.070329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:12.901 09:54:57 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:12.901 09:54:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2391893 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2391893 ']' 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2391893 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2391893 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2391893' 00:06:12.901 killing process with pid 2391893 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@969 -- # kill 2391893 00:06:12.901 09:54:57 alias_rpc -- common/autotest_common.sh@974 -- # wait 2391893 00:06:13.160 00:06:13.160 real 0m1.481s 00:06:13.160 user 0m1.639s 00:06:13.160 sys 0m0.378s 00:06:13.160 09:54:58 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.160 09:54:58 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.160 ************************************ 00:06:13.160 END TEST alias_rpc 00:06:13.160 ************************************ 00:06:13.160 09:54:58 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:13.160 09:54:58 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:13.160 09:54:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.160 09:54:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.160 09:54:58 -- common/autotest_common.sh@10 -- # set +x 00:06:13.418 ************************************ 00:06:13.418 START TEST spdkcli_tcp 00:06:13.418 ************************************ 00:06:13.418 09:54:58 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:13.418 * Looking for test storage... 
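Every target in this log is torn down through the same killprocess helper; the alias_rpc teardown traced just above shows its xtrace. Reconstructed loosely from those traces (autotest_common.sh@950 through @974), it behaves roughly like the following sketch, which simplifies the sudo special case:

    killprocess() {
      local pid=$1
      [ -n "$pid" ] || return 1
      kill -0 "$pid"                                        # assert the process still exists
      local name; name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an spdk_tgt
      [ "$name" = sudo ] || echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
    }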
00:06:13.418 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:13.418 09:54:58 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:13.419 09:54:58 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:13.419 09:54:58 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:13.419 09:54:58 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:13.419 09:54:58 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:13.419 09:54:58 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:13.419 09:54:58 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:13.419 09:54:58 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.419 09:54:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.419 09:54:58 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2392282 00:06:13.419 09:54:58 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2392282 00:06:13.419 09:54:58 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:13.419 09:54:58 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2392282 ']' 00:06:13.419 09:54:58 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.419 09:54:58 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.419 09:54:58 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.419 09:54:58 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.419 09:54:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.419 [2024-07-25 09:54:58.488033] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
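spdkcli_tcp checks the same JSON-RPC surface over TCP instead of the UNIX socket. The bridge it builds a few lines below (tcp.sh@30 and tcp.sh@33) is just socat in front of the RPC socket, with rpc.py pointed at the TCP side:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &            # expose the RPC socket on TCP 9998
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods    # 100 retries, 2 s timeout

The long method array below is that rpc_get_methods call's output.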
00:06:13.419 [2024-07-25 09:54:58.488085] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392282 ] 00:06:13.419 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.419 [2024-07-25 09:54:58.555977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.678 [2024-07-25 09:54:58.631843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.678 [2024-07-25 09:54:58.631844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.245 09:54:59 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.245 09:54:59 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:14.245 09:54:59 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2392400 00:06:14.245 09:54:59 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:14.245 09:54:59 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:14.505 [ 00:06:14.505 "bdev_malloc_delete", 00:06:14.505 "bdev_malloc_create", 00:06:14.505 "bdev_null_resize", 00:06:14.505 "bdev_null_delete", 00:06:14.505 "bdev_null_create", 00:06:14.505 "bdev_nvme_cuse_unregister", 00:06:14.505 "bdev_nvme_cuse_register", 00:06:14.505 "bdev_opal_new_user", 00:06:14.505 "bdev_opal_set_lock_state", 00:06:14.505 "bdev_opal_delete", 00:06:14.505 "bdev_opal_get_info", 00:06:14.505 "bdev_opal_create", 00:06:14.505 "bdev_nvme_opal_revert", 00:06:14.505 "bdev_nvme_opal_init", 00:06:14.505 "bdev_nvme_send_cmd", 00:06:14.505 "bdev_nvme_get_path_iostat", 00:06:14.505 "bdev_nvme_get_mdns_discovery_info", 00:06:14.505 "bdev_nvme_stop_mdns_discovery", 00:06:14.505 "bdev_nvme_start_mdns_discovery", 00:06:14.505 "bdev_nvme_set_multipath_policy", 00:06:14.505 "bdev_nvme_set_preferred_path", 00:06:14.505 "bdev_nvme_get_io_paths", 00:06:14.505 "bdev_nvme_remove_error_injection", 00:06:14.505 "bdev_nvme_add_error_injection", 00:06:14.505 "bdev_nvme_get_discovery_info", 00:06:14.505 "bdev_nvme_stop_discovery", 00:06:14.505 "bdev_nvme_start_discovery", 00:06:14.505 "bdev_nvme_get_controller_health_info", 00:06:14.505 "bdev_nvme_disable_controller", 00:06:14.505 "bdev_nvme_enable_controller", 00:06:14.505 "bdev_nvme_reset_controller", 00:06:14.505 "bdev_nvme_get_transport_statistics", 00:06:14.505 "bdev_nvme_apply_firmware", 00:06:14.505 "bdev_nvme_detach_controller", 00:06:14.505 "bdev_nvme_get_controllers", 00:06:14.505 "bdev_nvme_attach_controller", 00:06:14.505 "bdev_nvme_set_hotplug", 00:06:14.505 "bdev_nvme_set_options", 00:06:14.505 "bdev_passthru_delete", 00:06:14.505 "bdev_passthru_create", 00:06:14.505 "bdev_lvol_set_parent_bdev", 00:06:14.505 "bdev_lvol_set_parent", 00:06:14.505 "bdev_lvol_check_shallow_copy", 00:06:14.505 "bdev_lvol_start_shallow_copy", 00:06:14.505 "bdev_lvol_grow_lvstore", 00:06:14.505 "bdev_lvol_get_lvols", 00:06:14.505 "bdev_lvol_get_lvstores", 00:06:14.505 "bdev_lvol_delete", 00:06:14.505 "bdev_lvol_set_read_only", 00:06:14.505 "bdev_lvol_resize", 00:06:14.505 "bdev_lvol_decouple_parent", 00:06:14.505 "bdev_lvol_inflate", 00:06:14.505 "bdev_lvol_rename", 00:06:14.505 "bdev_lvol_clone_bdev", 00:06:14.505 "bdev_lvol_clone", 00:06:14.505 "bdev_lvol_snapshot", 00:06:14.505 "bdev_lvol_create", 00:06:14.505 "bdev_lvol_delete_lvstore", 00:06:14.505 
"bdev_lvol_rename_lvstore", 00:06:14.505 "bdev_lvol_create_lvstore", 00:06:14.505 "bdev_raid_set_options", 00:06:14.505 "bdev_raid_remove_base_bdev", 00:06:14.505 "bdev_raid_add_base_bdev", 00:06:14.505 "bdev_raid_delete", 00:06:14.505 "bdev_raid_create", 00:06:14.505 "bdev_raid_get_bdevs", 00:06:14.505 "bdev_error_inject_error", 00:06:14.505 "bdev_error_delete", 00:06:14.505 "bdev_error_create", 00:06:14.505 "bdev_split_delete", 00:06:14.505 "bdev_split_create", 00:06:14.505 "bdev_delay_delete", 00:06:14.505 "bdev_delay_create", 00:06:14.505 "bdev_delay_update_latency", 00:06:14.505 "bdev_zone_block_delete", 00:06:14.505 "bdev_zone_block_create", 00:06:14.505 "blobfs_create", 00:06:14.505 "blobfs_detect", 00:06:14.505 "blobfs_set_cache_size", 00:06:14.505 "bdev_aio_delete", 00:06:14.505 "bdev_aio_rescan", 00:06:14.505 "bdev_aio_create", 00:06:14.505 "bdev_ftl_set_property", 00:06:14.505 "bdev_ftl_get_properties", 00:06:14.505 "bdev_ftl_get_stats", 00:06:14.505 "bdev_ftl_unmap", 00:06:14.505 "bdev_ftl_unload", 00:06:14.505 "bdev_ftl_delete", 00:06:14.505 "bdev_ftl_load", 00:06:14.505 "bdev_ftl_create", 00:06:14.505 "bdev_virtio_attach_controller", 00:06:14.505 "bdev_virtio_scsi_get_devices", 00:06:14.505 "bdev_virtio_detach_controller", 00:06:14.505 "bdev_virtio_blk_set_hotplug", 00:06:14.505 "bdev_iscsi_delete", 00:06:14.505 "bdev_iscsi_create", 00:06:14.505 "bdev_iscsi_set_options", 00:06:14.505 "accel_error_inject_error", 00:06:14.505 "ioat_scan_accel_module", 00:06:14.505 "dsa_scan_accel_module", 00:06:14.505 "iaa_scan_accel_module", 00:06:14.505 "keyring_file_remove_key", 00:06:14.505 "keyring_file_add_key", 00:06:14.505 "keyring_linux_set_options", 00:06:14.505 "iscsi_get_histogram", 00:06:14.505 "iscsi_enable_histogram", 00:06:14.505 "iscsi_set_options", 00:06:14.505 "iscsi_get_auth_groups", 00:06:14.505 "iscsi_auth_group_remove_secret", 00:06:14.505 "iscsi_auth_group_add_secret", 00:06:14.505 "iscsi_delete_auth_group", 00:06:14.505 "iscsi_create_auth_group", 00:06:14.505 "iscsi_set_discovery_auth", 00:06:14.505 "iscsi_get_options", 00:06:14.505 "iscsi_target_node_request_logout", 00:06:14.505 "iscsi_target_node_set_redirect", 00:06:14.505 "iscsi_target_node_set_auth", 00:06:14.505 "iscsi_target_node_add_lun", 00:06:14.505 "iscsi_get_stats", 00:06:14.505 "iscsi_get_connections", 00:06:14.505 "iscsi_portal_group_set_auth", 00:06:14.505 "iscsi_start_portal_group", 00:06:14.505 "iscsi_delete_portal_group", 00:06:14.505 "iscsi_create_portal_group", 00:06:14.505 "iscsi_get_portal_groups", 00:06:14.505 "iscsi_delete_target_node", 00:06:14.505 "iscsi_target_node_remove_pg_ig_maps", 00:06:14.505 "iscsi_target_node_add_pg_ig_maps", 00:06:14.505 "iscsi_create_target_node", 00:06:14.505 "iscsi_get_target_nodes", 00:06:14.505 "iscsi_delete_initiator_group", 00:06:14.505 "iscsi_initiator_group_remove_initiators", 00:06:14.505 "iscsi_initiator_group_add_initiators", 00:06:14.505 "iscsi_create_initiator_group", 00:06:14.505 "iscsi_get_initiator_groups", 00:06:14.505 "nvmf_set_crdt", 00:06:14.505 "nvmf_set_config", 00:06:14.505 "nvmf_set_max_subsystems", 00:06:14.505 "nvmf_stop_mdns_prr", 00:06:14.505 "nvmf_publish_mdns_prr", 00:06:14.505 "nvmf_subsystem_get_listeners", 00:06:14.505 "nvmf_subsystem_get_qpairs", 00:06:14.505 "nvmf_subsystem_get_controllers", 00:06:14.505 "nvmf_get_stats", 00:06:14.505 "nvmf_get_transports", 00:06:14.505 "nvmf_create_transport", 00:06:14.505 "nvmf_get_targets", 00:06:14.505 "nvmf_delete_target", 00:06:14.505 "nvmf_create_target", 00:06:14.505 
"nvmf_subsystem_allow_any_host", 00:06:14.505 "nvmf_subsystem_remove_host", 00:06:14.505 "nvmf_subsystem_add_host", 00:06:14.505 "nvmf_ns_remove_host", 00:06:14.505 "nvmf_ns_add_host", 00:06:14.505 "nvmf_subsystem_remove_ns", 00:06:14.505 "nvmf_subsystem_add_ns", 00:06:14.505 "nvmf_subsystem_listener_set_ana_state", 00:06:14.505 "nvmf_discovery_get_referrals", 00:06:14.505 "nvmf_discovery_remove_referral", 00:06:14.505 "nvmf_discovery_add_referral", 00:06:14.505 "nvmf_subsystem_remove_listener", 00:06:14.505 "nvmf_subsystem_add_listener", 00:06:14.505 "nvmf_delete_subsystem", 00:06:14.505 "nvmf_create_subsystem", 00:06:14.505 "nvmf_get_subsystems", 00:06:14.505 "env_dpdk_get_mem_stats", 00:06:14.505 "nbd_get_disks", 00:06:14.505 "nbd_stop_disk", 00:06:14.505 "nbd_start_disk", 00:06:14.505 "ublk_recover_disk", 00:06:14.505 "ublk_get_disks", 00:06:14.505 "ublk_stop_disk", 00:06:14.505 "ublk_start_disk", 00:06:14.505 "ublk_destroy_target", 00:06:14.505 "ublk_create_target", 00:06:14.505 "virtio_blk_create_transport", 00:06:14.505 "virtio_blk_get_transports", 00:06:14.505 "vhost_controller_set_coalescing", 00:06:14.505 "vhost_get_controllers", 00:06:14.505 "vhost_delete_controller", 00:06:14.505 "vhost_create_blk_controller", 00:06:14.506 "vhost_scsi_controller_remove_target", 00:06:14.506 "vhost_scsi_controller_add_target", 00:06:14.506 "vhost_start_scsi_controller", 00:06:14.506 "vhost_create_scsi_controller", 00:06:14.506 "thread_set_cpumask", 00:06:14.506 "framework_get_governor", 00:06:14.506 "framework_get_scheduler", 00:06:14.506 "framework_set_scheduler", 00:06:14.506 "framework_get_reactors", 00:06:14.506 "thread_get_io_channels", 00:06:14.506 "thread_get_pollers", 00:06:14.506 "thread_get_stats", 00:06:14.506 "framework_monitor_context_switch", 00:06:14.506 "spdk_kill_instance", 00:06:14.506 "log_enable_timestamps", 00:06:14.506 "log_get_flags", 00:06:14.506 "log_clear_flag", 00:06:14.506 "log_set_flag", 00:06:14.506 "log_get_level", 00:06:14.506 "log_set_level", 00:06:14.506 "log_get_print_level", 00:06:14.506 "log_set_print_level", 00:06:14.506 "framework_enable_cpumask_locks", 00:06:14.506 "framework_disable_cpumask_locks", 00:06:14.506 "framework_wait_init", 00:06:14.506 "framework_start_init", 00:06:14.506 "scsi_get_devices", 00:06:14.506 "bdev_get_histogram", 00:06:14.506 "bdev_enable_histogram", 00:06:14.506 "bdev_set_qos_limit", 00:06:14.506 "bdev_set_qd_sampling_period", 00:06:14.506 "bdev_get_bdevs", 00:06:14.506 "bdev_reset_iostat", 00:06:14.506 "bdev_get_iostat", 00:06:14.506 "bdev_examine", 00:06:14.506 "bdev_wait_for_examine", 00:06:14.506 "bdev_set_options", 00:06:14.506 "notify_get_notifications", 00:06:14.506 "notify_get_types", 00:06:14.506 "accel_get_stats", 00:06:14.506 "accel_set_options", 00:06:14.506 "accel_set_driver", 00:06:14.506 "accel_crypto_key_destroy", 00:06:14.506 "accel_crypto_keys_get", 00:06:14.506 "accel_crypto_key_create", 00:06:14.506 "accel_assign_opc", 00:06:14.506 "accel_get_module_info", 00:06:14.506 "accel_get_opc_assignments", 00:06:14.506 "vmd_rescan", 00:06:14.506 "vmd_remove_device", 00:06:14.506 "vmd_enable", 00:06:14.506 "sock_get_default_impl", 00:06:14.506 "sock_set_default_impl", 00:06:14.506 "sock_impl_set_options", 00:06:14.506 "sock_impl_get_options", 00:06:14.506 "iobuf_get_stats", 00:06:14.506 "iobuf_set_options", 00:06:14.506 "framework_get_pci_devices", 00:06:14.506 "framework_get_config", 00:06:14.506 "framework_get_subsystems", 00:06:14.506 "trace_get_info", 00:06:14.506 "trace_get_tpoint_group_mask", 00:06:14.506 
"trace_disable_tpoint_group", 00:06:14.506 "trace_enable_tpoint_group", 00:06:14.506 "trace_clear_tpoint_mask", 00:06:14.506 "trace_set_tpoint_mask", 00:06:14.506 "keyring_get_keys", 00:06:14.506 "spdk_get_version", 00:06:14.506 "rpc_get_methods" 00:06:14.506 ] 00:06:14.506 09:54:59 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.506 09:54:59 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:14.506 09:54:59 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2392282 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2392282 ']' 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2392282 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2392282 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2392282' 00:06:14.506 killing process with pid 2392282 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2392282 00:06:14.506 09:54:59 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2392282 00:06:14.765 00:06:14.765 real 0m1.509s 00:06:14.765 user 0m2.791s 00:06:14.765 sys 0m0.437s 00:06:14.765 09:54:59 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.765 09:54:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.765 ************************************ 00:06:14.765 END TEST spdkcli_tcp 00:06:14.765 ************************************ 00:06:14.765 09:54:59 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:14.765 09:54:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.765 09:54:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.765 09:54:59 -- common/autotest_common.sh@10 -- # set +x 00:06:14.765 ************************************ 00:06:14.765 START TEST dpdk_mem_utility 00:06:14.765 ************************************ 00:06:14.765 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:15.024 * Looking for test storage... 
00:06:15.024 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:15.024 09:55:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:15.024 09:55:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2392692 00:06:15.024 09:55:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:15.024 09:55:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2392692 00:06:15.024 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2392692 ']' 00:06:15.024 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.024 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:15.024 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.024 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:15.024 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.024 [2024-07-25 09:55:00.060410] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:15.024 [2024-07-25 09:55:00.060452] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392692 ] 00:06:15.024 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.024 [2024-07-25 09:55:00.114486] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.283 [2024-07-25 09:55:00.187645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.851 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:15.851 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:15.851 09:55:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:15.851 09:55:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:15.851 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:15.851 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:15.851 { 00:06:15.851 "filename": "/tmp/spdk_mem_dump.txt" 00:06:15.851 } 00:06:15.851 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:15.851 09:55:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:15.851 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:15.851 1 heaps totaling size 814.000000 MiB 00:06:15.851 size: 814.000000 MiB heap id: 0 00:06:15.851 end heaps---------- 00:06:15.851 8 mempools totaling size 598.116089 MiB 00:06:15.852 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:15.852 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:15.852 size: 84.521057 MiB name: bdev_io_2392692 00:06:15.852 size: 51.011292 MiB name: evtpool_2392692 00:06:15.852 size: 50.003479 MiB 
name: msgpool_2392692 00:06:15.852 size: 21.763794 MiB name: PDU_Pool 00:06:15.852 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:15.852 size: 0.026123 MiB name: Session_Pool 00:06:15.852 end mempools------- 00:06:15.852 6 memzones totaling size 4.142822 MiB 00:06:15.852 size: 1.000366 MiB name: RG_ring_0_2392692 00:06:15.852 size: 1.000366 MiB name: RG_ring_1_2392692 00:06:15.852 size: 1.000366 MiB name: RG_ring_4_2392692 00:06:15.852 size: 1.000366 MiB name: RG_ring_5_2392692 00:06:15.852 size: 0.125366 MiB name: RG_ring_2_2392692 00:06:15.852 size: 0.015991 MiB name: RG_ring_3_2392692 00:06:15.852 end memzones------- 00:06:15.852 09:55:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:15.852 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:15.852 list of free elements. size: 12.519348 MiB 00:06:15.852 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:15.852 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:15.852 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:15.852 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:15.852 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:15.852 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:15.852 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:15.852 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:15.852 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:15.852 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:15.852 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:15.852 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:15.852 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:15.852 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:15.852 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:15.852 list of standard malloc elements. 
size: 199.218079 MiB 00:06:15.852 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:15.852 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:15.852 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:15.852 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:15.852 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:15.852 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:15.852 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:15.852 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:15.852 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:15.852 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:15.852 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:15.852 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:15.852 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:15.852 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:15.852 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:15.852 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:15.852 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:15.852 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:15.852 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:15.852 list of memzone associated elements. 
size: 602.262573 MiB 00:06:15.852 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:15.852 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:15.852 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:15.852 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:15.852 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:15.852 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2392692_0 00:06:15.852 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:15.852 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2392692_0 00:06:15.852 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:15.852 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2392692_0 00:06:15.852 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:15.852 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:15.852 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:15.852 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:15.852 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:15.852 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2392692 00:06:15.852 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:15.852 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2392692 00:06:15.852 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:15.852 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2392692 00:06:15.852 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:15.852 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:15.852 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:15.852 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:15.852 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:15.852 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:15.852 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:15.852 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:15.852 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:15.852 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2392692 00:06:15.852 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:15.852 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2392692 00:06:15.852 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:15.852 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2392692 00:06:15.852 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:15.852 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2392692 00:06:15.852 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:15.852 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2392692 00:06:15.852 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:15.852 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:15.852 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:15.852 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:15.852 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:15.852 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:15.852 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:15.852 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_2392692 00:06:15.852 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:15.852 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:15.852 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:15.852 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:15.852 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:15.852 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2392692 00:06:15.852 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:15.852 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:15.852 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:15.852 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2392692 00:06:15.852 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:15.852 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2392692 00:06:15.852 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:15.852 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:15.852 09:55:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:15.852 09:55:00 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2392692 00:06:15.852 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2392692 ']' 00:06:15.852 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2392692 00:06:15.852 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:15.852 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:15.852 09:55:00 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2392692 00:06:16.112 09:55:01 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:16.112 09:55:01 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:16.112 09:55:01 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2392692' 00:06:16.112 killing process with pid 2392692 00:06:16.112 09:55:01 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2392692 00:06:16.112 09:55:01 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2392692 00:06:16.371 00:06:16.371 real 0m1.405s 00:06:16.371 user 0m1.489s 00:06:16.371 sys 0m0.392s 00:06:16.372 09:55:01 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.372 09:55:01 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:16.372 ************************************ 00:06:16.372 END TEST dpdk_mem_utility 00:06:16.372 ************************************ 00:06:16.372 09:55:01 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:16.372 09:55:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:16.372 09:55:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.372 09:55:01 -- common/autotest_common.sh@10 -- # set +x 00:06:16.372 ************************************ 00:06:16.372 START TEST event 00:06:16.372 ************************************ 00:06:16.372 09:55:01 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:16.372 * Looking for test storage... 
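The event suite opens with event_perf, a raw reactor event throughput run; the invocation traced below is:

    test/event/event_perf/event_perf -m 0xF -t 1   # four reactors (coremask 0xF), one second
    # Output: one "lcore N: <count>" events counter per reactor, then "done."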
00:06:16.372 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:16.372 09:55:01 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:16.372 09:55:01 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:16.372 09:55:01 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.372 09:55:01 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:16.372 09:55:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.372 09:55:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.372 ************************************ 00:06:16.372 START TEST event_perf 00:06:16.372 ************************************ 00:06:16.372 09:55:01 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:16.372 Running I/O for 1 seconds...[2024-07-25 09:55:01.524871] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:16.372 [2024-07-25 09:55:01.524943] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392976 ] 00:06:16.630 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.630 [2024-07-25 09:55:01.582829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:16.630 [2024-07-25 09:55:01.658780] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.630 [2024-07-25 09:55:01.658885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:16.630 [2024-07-25 09:55:01.658991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:16.630 [2024-07-25 09:55:01.658998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.566 Running I/O for 1 seconds... 00:06:17.566 lcore 0: 213854 00:06:17.566 lcore 1: 213853 00:06:17.566 lcore 2: 213855 00:06:17.566 lcore 3: 213854 00:06:17.566 done. 00:06:17.825 00:06:17.825 real 0m1.225s 00:06:17.825 user 0m4.145s 00:06:17.825 sys 0m0.074s 00:06:17.825 09:55:02 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.825 09:55:02 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.825 ************************************ 00:06:17.825 END TEST event_perf 00:06:17.825 ************************************ 00:06:17.825 09:55:02 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:17.825 09:55:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:17.825 09:55:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.825 09:55:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.825 ************************************ 00:06:17.825 START TEST event_reactor 00:06:17.825 ************************************ 00:06:17.825 09:55:02 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:17.825 [2024-07-25 09:55:02.818751] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
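Next is the reactor test, a single-core run that registers timed pollers and logs their firings. Sketch of the invocation, with a hedged reading of its output:

    test/event/reactor/reactor -t 1   # one reactor (-c 0x1 in the EAL line), one second
    # The "tick 100/250/500" markers below appear to be keyed to poller periods;
    # "oneshot" fires once between test_start and test_end.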
00:06:17.825 [2024-07-25 09:55:02.818818] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393226 ] 00:06:17.825 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.825 [2024-07-25 09:55:02.889087] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.825 [2024-07-25 09:55:02.961536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.268 test_start 00:06:19.268 oneshot 00:06:19.268 tick 100 00:06:19.268 tick 100 00:06:19.268 tick 250 00:06:19.268 tick 100 00:06:19.268 tick 100 00:06:19.268 tick 100 00:06:19.268 tick 250 00:06:19.268 tick 500 00:06:19.268 tick 100 00:06:19.268 tick 100 00:06:19.268 tick 250 00:06:19.268 tick 100 00:06:19.268 tick 100 00:06:19.268 test_end 00:06:19.268 00:06:19.268 real 0m1.232s 00:06:19.268 user 0m1.147s 00:06:19.268 sys 0m0.081s 00:06:19.268 09:55:04 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.268 09:55:04 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:19.268 ************************************ 00:06:19.268 END TEST event_reactor 00:06:19.268 ************************************ 00:06:19.268 09:55:04 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.268 09:55:04 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:19.268 09:55:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.268 09:55:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.268 ************************************ 00:06:19.268 START TEST event_reactor_perf 00:06:19.268 ************************************ 00:06:19.268 09:55:04 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:19.268 [2024-07-25 09:55:04.119233] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
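event_reactor_perf then measures the event round-trip rate on a single reactor for one second:

    test/event/reactor_perf/reactor_perf -t 1
    # This run reports: Performance: 520050 events per second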
00:06:19.268 [2024-07-25 09:55:04.119302] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393480 ] 00:06:19.268 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.268 [2024-07-25 09:55:04.189462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.268 [2024-07-25 09:55:04.260528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.244 test_start 00:06:20.244 test_end 00:06:20.244 Performance: 520050 events per second 00:06:20.244 00:06:20.244 real 0m1.229s 00:06:20.244 user 0m1.138s 00:06:20.244 sys 0m0.087s 00:06:20.244 09:55:05 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.244 09:55:05 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:20.244 ************************************ 00:06:20.244 END TEST event_reactor_perf 00:06:20.244 ************************************ 00:06:20.244 09:55:05 event -- event/event.sh@49 -- # uname -s 00:06:20.244 09:55:05 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:20.244 09:55:05 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:20.244 09:55:05 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.244 09:55:05 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.244 09:55:05 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.244 ************************************ 00:06:20.244 START TEST event_scheduler 00:06:20.244 ************************************ 00:06:20.244 09:55:05 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:20.503 * Looking for test storage... 00:06:20.503 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:20.503 09:55:05 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:20.503 09:55:05 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2393757 00:06:20.503 09:55:05 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:20.503 09:55:05 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:20.503 09:55:05 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2393757 00:06:20.503 09:55:05 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2393757 ']' 00:06:20.503 09:55:05 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.503 09:55:05 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:20.503 09:55:05 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
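The scheduler test launches its app with --wait-for-rpc, so the framework pauses before subsystem init and the test can select a scheduler first. The RPC pair traced just below does exactly that:

    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &   # -p 0x2: main core 2 (see --main-lcore=2 below)
    rpc_cmd framework_set_scheduler dynamic    # dpdk governor is unavailable here; dynamic is still set
    rpc_cmd framework_start_init               # now let the framework come up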
00:06:20.503 09:55:05 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:20.503 09:55:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:20.503 [2024-07-25 09:55:05.536823] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:20.503 [2024-07-25 09:55:05.536867] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393757 ] 00:06:20.503 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.503 [2024-07-25 09:55:05.605016] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:20.761 [2024-07-25 09:55:05.686768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.761 [2024-07-25 09:55:05.686877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:20.761 [2024-07-25 09:55:05.686902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:20.761 [2024-07-25 09:55:05.686903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:21.329 09:55:06 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.329 [2024-07-25 09:55:06.353342] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:21.329 [2024-07-25 09:55:06.353358] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:21.329 [2024-07-25 09:55:06.353367] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:21.329 [2024-07-25 09:55:06.353372] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:21.329 [2024-07-25 09:55:06.353377] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.329 09:55:06 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.329 [2024-07-25 09:55:06.424611] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
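The scheduler_create_thread subtest that follows drives a test-only RPC plugin to create threads with different cpumasks and active percentages, then retunes one. Shape of the calls traced below (scheduler.sh@12 through @23), issued through the harness's rpc_cmd wrapper:

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50   # thread_id 11, 50% active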
00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.329 09:55:06 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.329 09:55:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:21.329 ************************************ 00:06:21.329 START TEST scheduler_create_thread 00:06:21.329 ************************************ 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.329 2 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.329 3 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.329 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.588 4 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.588 5 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.588 6 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.588 7 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.588 8 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.588 9 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.588 10 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.588 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.964 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.964 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:22.964 09:55:08 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:22.964 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.964 09:55:08 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.342 09:55:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.342 00:06:24.342 real 0m2.620s 00:06:24.342 user 0m0.023s 00:06:24.342 sys 0m0.005s 00:06:24.342 09:55:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.342 09:55:09 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:24.342 ************************************ 00:06:24.342 END TEST scheduler_create_thread 00:06:24.342 ************************************ 00:06:24.342 09:55:09 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:24.342 09:55:09 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2393757 00:06:24.342 09:55:09 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2393757 ']' 00:06:24.342 09:55:09 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2393757 00:06:24.342 09:55:09 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:24.342 09:55:09 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.342 09:55:09 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2393757 00:06:24.342 09:55:09 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:24.342 09:55:09 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:24.342 09:55:09 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2393757' 00:06:24.342 killing process with pid 2393757 00:06:24.342 09:55:09 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2393757 00:06:24.342 09:55:09 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2393757 00:06:24.601 [2024-07-25 09:55:09.562544] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
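The shutdown trace just above (the '[' -z ... ']' guard, kill -0, uname, ps --no-headers -o comm=, kill, wait) is the common killprocess helper. A reconstruction from what the xtrace shows; the sudo branch is not exercised in this run, so its handling below is an assumption:

    # Reconstructed from the xtrace above; the exact autotest_common.sh
    # source may differ in details not visible here.
    killprocess() {
        local pid=$1
        [[ -z $pid ]] && return 1
        kill -0 "$pid" || return 1                  # is it still running?
        if [[ $(uname) == Linux ]]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            # Guard against killing a sudo wrapper instead of the reactor
            # process (assumed behavior; this branch is false in the log).
            [[ $process_name == sudo ]] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }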
00:06:24.601 00:06:24.601 real 0m4.349s 00:06:24.601 user 0m8.201s 00:06:24.601 sys 0m0.373s 00:06:24.601 09:55:09 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.601 09:55:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:24.601 ************************************ 00:06:24.601 END TEST event_scheduler 00:06:24.601 ************************************ 00:06:24.860 09:55:09 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:24.860 09:55:09 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:24.860 09:55:09 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.860 09:55:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.860 09:55:09 event -- common/autotest_common.sh@10 -- # set +x 00:06:24.860 ************************************ 00:06:24.860 START TEST app_repeat 00:06:24.860 ************************************ 00:06:24.860 09:55:09 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:24.860 09:55:09 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.860 09:55:09 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2394504 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2394504' 00:06:24.861 Process app_repeat pid: 2394504 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:24.861 spdk_app_start Round 0 00:06:24.861 09:55:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2394504 /var/tmp/spdk-nbd.sock 00:06:24.861 09:55:09 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2394504 ']' 00:06:24.861 09:55:09 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.861 09:55:09 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:24.861 09:55:09 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.861 09:55:09 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:24.861 09:55:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.861 [2024-07-25 09:55:09.865256] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:24.861 [2024-07-25 09:55:09.865310] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394504 ] 00:06:24.861 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.861 [2024-07-25 09:55:09.932838] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:24.861 [2024-07-25 09:55:10.014227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.861 [2024-07-25 09:55:10.014228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.797 09:55:10 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:25.797 09:55:10 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:25.797 09:55:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:25.797 Malloc0 00:06:25.797 09:55:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:26.056 Malloc1 00:06:26.056 09:55:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.056 09:55:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:26.315 /dev/nbd0 00:06:26.315 09:55:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:26.315 09:55:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.315 09:55:11 event.app_repeat -- 
common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.315 1+0 records in 00:06:26.315 1+0 records out 00:06:26.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226101 s, 18.1 MB/s 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.315 09:55:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.315 09:55:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:26.315 09:55:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:26.315 /dev/nbd1 00:06:26.315 09:55:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:26.315 09:55:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:26.315 09:55:11 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:26.574 09:55:11 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:26.574 09:55:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:26.574 09:55:11 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:26.574 09:55:11 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:26.574 1+0 records in 00:06:26.574 1+0 records out 00:06:26.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236927 s, 17.3 MB/s 00:06:26.574 09:55:11 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:26.574 09:55:11 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:26.574 09:55:11 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:26.574 09:55:11 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:26.574 09:55:11 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 
)) 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:26.574 { 00:06:26.574 "nbd_device": "/dev/nbd0", 00:06:26.574 "bdev_name": "Malloc0" 00:06:26.574 }, 00:06:26.574 { 00:06:26.574 "nbd_device": "/dev/nbd1", 00:06:26.574 "bdev_name": "Malloc1" 00:06:26.574 } 00:06:26.574 ]' 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:26.574 { 00:06:26.574 "nbd_device": "/dev/nbd0", 00:06:26.574 "bdev_name": "Malloc0" 00:06:26.574 }, 00:06:26.574 { 00:06:26.574 "nbd_device": "/dev/nbd1", 00:06:26.574 "bdev_name": "Malloc1" 00:06:26.574 } 00:06:26.574 ]' 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.574 /dev/nbd1' 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.574 /dev/nbd1' 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.574 09:55:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.575 09:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.575 09:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.575 09:55:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.575 09:55:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.575 09:55:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.575 09:55:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.833 256+0 records in 00:06:26.833 256+0 records out 00:06:26.833 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103763 s, 101 MB/s 00:06:26.833 09:55:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.834 256+0 records in 00:06:26.834 256+0 records out 00:06:26.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013689 s, 76.6 MB/s 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.834 256+0 records in 00:06:26.834 256+0 records out 00:06:26.834 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145345 s, 72.1 MB/s 00:06:26.834 09:55:11 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.834 09:55:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.092 09:55:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.092 09:55:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.092 09:55:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.092 09:55:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.092 09:55:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.092 
09:55:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.092 09:55:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.092 09:55:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.092 09:55:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.092 09:55:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.092 09:55:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.351 09:55:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.351 09:55:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.610 09:55:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:27.869 [2024-07-25 09:55:12.787909] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.869 [2024-07-25 09:55:12.854453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.869 [2024-07-25 09:55:12.854455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.869 [2024-07-25 09:55:12.894612] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.869 [2024-07-25 09:55:12.894651] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:31.151 09:55:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:31.151 09:55:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:31.151 spdk_app_start Round 1 00:06:31.151 09:55:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2394504 /var/tmp/spdk-nbd.sock 00:06:31.151 09:55:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2394504 ']' 00:06:31.151 09:55:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.151 09:55:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:31.151 09:55:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
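Round 0 above exercised the full nbd data path: two malloc bdevs (bdev_malloc_create 64 4096, i.e. 64 MB with 4096-byte blocks), nbd_start_disk for each, a 1 MiB random file written onto each device with oflag=direct, cmp -b -n 1M to verify the read-back, then nbd_stop_disk. A condensed sketch of that cycle, assuming a running app listening on /var/tmp/spdk-nbd.sock; the temp file name is illustrative:

    rpc="scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096        # -> Malloc0 (64 MB, 4 KiB blocks)
    $rpc bdev_malloc_create 64 4096        # -> Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256    # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct  # write it out
        cmp -b -n 1M nbdrandtest "$nbd"                             # read back, verify
    done
    rm nbdrandtest

    $rpc nbd_stop_disk /dev/nbd0
    $rpc nbd_stop_disk /dev/nbd1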
00:06:31.151 09:55:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:31.151 09:55:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.151 09:55:15 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.151 09:55:15 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:31.151 09:55:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.151 Malloc0 00:06:31.151 09:55:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:31.151 Malloc1 00:06:31.151 09:55:16 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.151 09:55:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:31.410 /dev/nbd0 00:06:31.410 09:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:31.410 09:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:31.410 1+0 records in 00:06:31.410 1+0 records out 00:06:31.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000240856 s, 17.0 MB/s 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.410 09:55:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.410 09:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.410 09:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.410 09:55:16 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.410 /dev/nbd1 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.668 1+0 records in 00:06:31.668 1+0 records out 00:06:31.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198545 s, 20.6 MB/s 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:31.668 09:55:16 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.668 { 00:06:31.668 
"nbd_device": "/dev/nbd0", 00:06:31.668 "bdev_name": "Malloc0" 00:06:31.668 }, 00:06:31.668 { 00:06:31.668 "nbd_device": "/dev/nbd1", 00:06:31.668 "bdev_name": "Malloc1" 00:06:31.668 } 00:06:31.668 ]' 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.668 { 00:06:31.668 "nbd_device": "/dev/nbd0", 00:06:31.668 "bdev_name": "Malloc0" 00:06:31.668 }, 00:06:31.668 { 00:06:31.668 "nbd_device": "/dev/nbd1", 00:06:31.668 "bdev_name": "Malloc1" 00:06:31.668 } 00:06:31.668 ]' 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.668 /dev/nbd1' 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.668 /dev/nbd1' 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.668 09:55:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.669 09:55:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.927 256+0 records in 00:06:31.927 256+0 records out 00:06:31.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103642 s, 101 MB/s 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.927 256+0 records in 00:06:31.927 256+0 records out 00:06:31.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134399 s, 78.0 MB/s 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.927 256+0 records in 00:06:31.927 256+0 records out 00:06:31.927 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145751 s, 71.9 MB/s 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.927 09:55:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.927 09:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:32.185 09:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:32.185 09:55:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:32.185 09:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.185 09:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.185 09:55:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:32.185 09:55:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.185 09:55:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.186 09:55:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.444 09:55:17 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.444 09:55:17 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.702 09:55:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.961 [2024-07-25 09:55:17.908473] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.961 [2024-07-25 09:55:17.976256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.961 [2024-07-25 09:55:17.976257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.961 [2024-07-25 09:55:18.016778] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.961 [2024-07-25 09:55:18.016817] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.245 09:55:20 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:36.245 09:55:20 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:36.245 spdk_app_start Round 2 00:06:36.245 09:55:20 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2394504 /var/tmp/spdk-nbd.sock 00:06:36.245 09:55:20 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2394504 ']' 00:06:36.245 09:55:20 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.245 09:55:20 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:36.245 09:55:20 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
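Each nbd_start_disk in the rounds above is followed by a waitfornbd probe: poll /proc/partitions until the device name appears, then force a single direct-I/O read through the device and sanity-check that a non-zero byte count came back (the '[' 4096 '!=' 0 ']' checks in the trace). A sketch reconstructed from the xtrace; the retry budget of 20 matches the (( i <= 20 )) counters in the log, while the retry delay and temp path are assumptions:

    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1    # delay assumed; not visible in this log
        done
        for ((i = 1; i <= 20; i++)); do
            # One O_DIRECT read proves the kernel<->SPDK nbd path moves real data.
            if dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s /tmp/nbdtest)
                rm -f /tmp/nbdtest
                [[ $size != 0 ]] && return 0
            fi
            sleep 0.1
        done
        return 1
    }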
00:06:36.245 09:55:20 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:36.245 09:55:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.245 09:55:20 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.245 09:55:20 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:36.245 09:55:20 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.245 Malloc0 00:06:36.245 09:55:21 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:36.245 Malloc1 00:06:36.245 09:55:21 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.245 09:55:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:36.503 /dev/nbd0 00:06:36.503 09:55:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.503 09:55:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 
iflag=direct 00:06:36.503 1+0 records in 00:06:36.503 1+0 records out 00:06:36.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000157084 s, 26.1 MB/s 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.503 09:55:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:36.503 09:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.503 09:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.503 09:55:21 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.762 /dev/nbd1 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.762 1+0 records in 00:06:36.762 1+0 records out 00:06:36.762 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198549 s, 20.6 MB/s 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.762 09:55:21 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.762 { 00:06:36.762 
"nbd_device": "/dev/nbd0", 00:06:36.762 "bdev_name": "Malloc0" 00:06:36.762 }, 00:06:36.762 { 00:06:36.762 "nbd_device": "/dev/nbd1", 00:06:36.762 "bdev_name": "Malloc1" 00:06:36.762 } 00:06:36.762 ]' 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.762 { 00:06:36.762 "nbd_device": "/dev/nbd0", 00:06:36.762 "bdev_name": "Malloc0" 00:06:36.762 }, 00:06:36.762 { 00:06:36.762 "nbd_device": "/dev/nbd1", 00:06:36.762 "bdev_name": "Malloc1" 00:06:36.762 } 00:06:36.762 ]' 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.762 /dev/nbd1' 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.762 /dev/nbd1' 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.762 09:55:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.763 09:55:21 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:37.021 256+0 records in 00:06:37.021 256+0 records out 00:06:37.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103197 s, 102 MB/s 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.021 256+0 records in 00:06:37.021 256+0 records out 00:06:37.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013791 s, 76.0 MB/s 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.021 256+0 records in 00:06:37.021 256+0 records out 00:06:37.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148982 s, 70.4 MB/s 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.021 09:55:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:37.021 09:55:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:37.021 09:55:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:37.021 09:55:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:37.021 09:55:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.021 09:55:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.021 09:55:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.280 09:55:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.567 09:55:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.567 09:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.567 09:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.568 09:55:22 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.568 09:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.568 09:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.568 09:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.568 09:55:22 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.568 09:55:22 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.568 09:55:22 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.568 09:55:22 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.568 09:55:22 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.568 09:55:22 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.827 09:55:22 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.827 [2024-07-25 09:55:22.981899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:38.085 [2024-07-25 09:55:23.049820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.085 [2024-07-25 09:55:23.049821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.085 [2024-07-25 09:55:23.089660] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:38.085 [2024-07-25 09:55:23.089709] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:41.369 09:55:25 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2394504 /var/tmp/spdk-nbd.sock 00:06:41.369 09:55:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2394504 ']' 00:06:41.369 09:55:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:41.369 09:55:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.369 09:55:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:41.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
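The nbd_dd_data_verify calls traced above implement a simple round-trip check: fill a 1 MiB scratch file from /dev/urandom, dd it onto each exported /dev/nbdX with oflag=direct, then byte-compare every device against the same file with cmp. A minimal sketch of that pattern as it reads from the xtrace (the scratch path is shortened here, and error handling beyond the dd/cmp exit codes is not visible in the trace, so it is left out):

    # write a random 1 MiB pattern to each nbd device, then read it back and compare
    nbd_dd_data_verify() {
        local nbd_list=("$@")                  # e.g. /dev/nbd0 /dev/nbd1
        local tmp_file=/tmp/nbdrandtest        # the trace uses a path under the spdk test tree
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for dev in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
        done
        for dev in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$dev"    # -n 1M limits the compare to the written range
        done
        rm "$tmp_file"
    }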
00:06:41.369 09:55:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.369 09:55:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:41.369 09:55:26 event.app_repeat -- event/event.sh@39 -- # killprocess 2394504 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2394504 ']' 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2394504 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2394504 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2394504' 00:06:41.369 killing process with pid 2394504 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2394504 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2394504 00:06:41.369 spdk_app_start is called in Round 0. 00:06:41.369 Shutdown signal received, stop current app iteration 00:06:41.369 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:41.369 spdk_app_start is called in Round 1. 00:06:41.369 Shutdown signal received, stop current app iteration 00:06:41.369 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:41.369 spdk_app_start is called in Round 2. 00:06:41.369 Shutdown signal received, stop current app iteration 00:06:41.369 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:41.369 spdk_app_start is called in Round 3. 
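Every killprocess call in this log follows the same defensive pattern: confirm the pid is still alive with kill -0, check via ps that the command name is an SPDK reactor rather than a sudo wrapper, then kill it and wait so the shell reaps it. Reconstructed from the xtrace (the signal choice and any retry logic beyond plain kill/wait are not visible here):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                                   # fails fast if the process is already gone
        local process_name
        [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1           # refuse to kill a sudo wrapper by mistake
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                      # reap it, so later pgrep checks stay clean
    }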
00:06:41.369 Shutdown signal received, stop current app iteration 00:06:41.369 09:55:26 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:41.369 09:55:26 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:41.369 00:06:41.369 real 0m16.386s 00:06:41.369 user 0m35.572s 00:06:41.369 sys 0m2.369s 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.369 09:55:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:41.369 ************************************ 00:06:41.369 END TEST app_repeat 00:06:41.369 ************************************ 00:06:41.369 09:55:26 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:41.369 09:55:26 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.369 09:55:26 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.369 09:55:26 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.369 09:55:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.369 ************************************ 00:06:41.369 START TEST cpu_locks 00:06:41.369 ************************************ 00:06:41.369 09:55:26 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:41.369 * Looking for test storage... 00:06:41.369 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:41.369 09:55:26 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:41.369 09:55:26 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:41.369 09:55:26 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:41.369 09:55:26 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:41.369 09:55:26 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:41.369 09:55:26 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:41.369 09:55:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.369 ************************************ 00:06:41.369 START TEST default_locks 00:06:41.369 ************************************ 00:06:41.369 09:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:41.369 09:55:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2397493 00:06:41.369 09:55:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2397493 00:06:41.369 09:55:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:41.369 09:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2397493 ']' 00:06:41.369 09:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.369 09:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.369 09:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
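waitforlisten only surfaces its entry in the xtrace (the rpc_addr, max_retries=100, and the banner above); the polling loop itself runs with tracing disabled. Purely as a sketch of what such a helper plausibly does, with the probe command and sleep interval being assumptions rather than anything shown in this log:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 1; i <= max_retries; i++)); do
            kill -0 "$pid" || return 1                   # target died before it started listening
            # assumed probe: ask the app for its RPC method list
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }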
00:06:41.369 09:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.369 09:55:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.369 [2024-07-25 09:55:26.454698] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:41.369 [2024-07-25 09:55:26.454739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397493 ] 00:06:41.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.369 [2024-07-25 09:55:26.518274] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.628 [2024-07-25 09:55:26.598644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.196 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.196 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:42.196 09:55:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2397493 00:06:42.196 09:55:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2397493 00:06:42.196 09:55:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.763 lslocks: write error 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2397493 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2397493 ']' 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2397493 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2397493 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2397493' 00:06:42.763 killing process with pid 2397493 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2397493 00:06:42.763 09:55:27 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2397493 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2397493 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2397493 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 2397493 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2397493 ']' 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.023 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2397493) - No such process 00:06:43.023 ERROR: process (pid: 2397493) is no longer running 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.023 00:06:43.023 real 0m1.653s 00:06:43.023 user 0m1.734s 00:06:43.023 sys 0m0.544s 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:43.023 09:55:28 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.023 ************************************ 00:06:43.023 END TEST default_locks 00:06:43.023 ************************************ 00:06:43.023 09:55:28 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:43.023 09:55:28 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:43.023 09:55:28 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:43.023 09:55:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:43.023 ************************************ 00:06:43.023 START TEST default_locks_via_rpc 00:06:43.023 ************************************ 00:06:43.023 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:43.023 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2397762 00:06:43.023 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2397762 00:06:43.023 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
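Two helpers carry the default_locks run above. locks_exist asserts that the live target really holds its CPU-core lock, by grepping lslocks output (the stray "lslocks: write error" lines are lslocks stderr noise, not a test failure). NOT wraps a command that is expected to fail, here waitforlisten on the already-killed pid, and inverts its exit status. Both as reconstructed from the trace; NOT's special-casing of exit codes above 128 is evaluated but not taken in this run, so it is elided:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock          # the lock file names carry this prefix
    }

    NOT() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))                                   # succeed only if the wrapped command failed
    }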
00:06:43.023 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2397762 ']' 00:06:43.023 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.023 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.023 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.023 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.023 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.023 [2024-07-25 09:55:28.172858] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:43.023 [2024-07-25 09:55:28.172895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397762 ] 00:06:43.282 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.282 [2024-07-25 09:55:28.237300] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.282 [2024-07-25 09:55:28.305785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2397762 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2397762 00:06:43.850 09:55:28 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 
2397762 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2397762 ']' 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2397762 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2397762 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2397762' 00:06:44.418 killing process with pid 2397762 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2397762 00:06:44.418 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2397762 00:06:44.678 00:06:44.678 real 0m1.633s 00:06:44.678 user 0m1.718s 00:06:44.678 sys 0m0.532s 00:06:44.678 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:44.678 09:55:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:44.678 ************************************ 00:06:44.678 END TEST default_locks_via_rpc 00:06:44.678 ************************************ 00:06:44.678 09:55:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:44.678 09:55:29 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:44.678 09:55:29 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:44.678 09:55:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.678 ************************************ 00:06:44.678 START TEST non_locking_app_on_locked_coremask 00:06:44.678 ************************************ 00:06:44.678 09:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:44.678 09:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2398033 00:06:44.678 09:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2398033 /var/tmp/spdk.sock 00:06:44.678 09:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.678 09:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2398033 ']' 00:06:44.678 09:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.678 09:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:44.678 09:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
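default_locks_via_rpc, just finished above, toggles the same core lock on a live target over RPC instead of process flags: framework_disable_cpumask_locks drops the lock (the suite's no_locks check then finds no /var/tmp/spdk_cpu_lock_* files), and framework_enable_cpumask_locks re-claims it. rpc_cmd in this suite forwards to scripts/rpc.py on the target's socket, so the sequence amounts to:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # release the per-core locks
    $rpc -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # claim them again
    lslocks -p "$tgt_pid" | grep spdk_cpu_lock                   # lock is visible once more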
00:06:44.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.678 09:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:44.678 09:55:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.937 [2024-07-25 09:55:29.873397] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:44.937 [2024-07-25 09:55:29.873436] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398033 ] 00:06:44.937 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.937 [2024-07-25 09:55:29.939179] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.937 [2024-07-25 09:55:30.014183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.873 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.873 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:45.873 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2398256 00:06:45.873 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2398256 /var/tmp/spdk2.sock 00:06:45.874 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:45.874 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2398256 ']' 00:06:45.874 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.874 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.874 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.874 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.874 09:55:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.874 [2024-07-25 09:55:30.720229] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:45.874 [2024-07-25 09:55:30.720276] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398256 ] 00:06:45.874 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.874 [2024-07-25 09:55:30.792268] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
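non_locking_app_on_locked_coremask, whose second instance just came up above, shows the escape hatch working: the first target locks core 0 normally, and a second target on the same core still starts because it passes --disable-cpumask-locks (hence the "CPU core locks deactivated" notice). Condensed from the trace:

    spdk_tgt -m 0x1 & pid1=$!                                            # locks core 0
    waitforlisten "$pid1"
    spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
    waitforlisten "$pid2" /var/tmp/spdk2.sock                            # succeeds despite the held lock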
00:06:45.874 [2024-07-25 09:55:30.792291] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.874 [2024-07-25 09:55:30.931607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.440 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.440 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:46.440 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2398033 00:06:46.440 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2398033 00:06:46.440 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.008 lslocks: write error 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2398033 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2398033 ']' 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2398033 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2398033 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2398033' 00:06:47.008 killing process with pid 2398033 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2398033 00:06:47.008 09:55:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2398033 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2398256 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2398256 ']' 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2398256 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2398256 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2398256' 00:06:47.576 
killing process with pid 2398256 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2398256 00:06:47.576 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2398256 00:06:47.835 00:06:47.835 real 0m3.120s 00:06:47.835 user 0m3.320s 00:06:47.835 sys 0m0.907s 00:06:47.835 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.835 09:55:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.836 ************************************ 00:06:47.836 END TEST non_locking_app_on_locked_coremask 00:06:47.836 ************************************ 00:06:47.836 09:55:32 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:47.836 09:55:32 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.836 09:55:32 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.836 09:55:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.095 ************************************ 00:06:48.095 START TEST locking_app_on_unlocked_coremask 00:06:48.095 ************************************ 00:06:48.095 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:48.095 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2398744 00:06:48.095 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2398744 /var/tmp/spdk.sock 00:06:48.095 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:48.095 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2398744 ']' 00:06:48.095 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.095 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.095 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.095 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.095 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.095 [2024-07-25 09:55:33.059036] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:48.095 [2024-07-25 09:55:33.059071] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398744 ] 00:06:48.095 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.095 [2024-07-25 09:55:33.122865] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
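All of these cases run through the suite's run_test wrapper, which prints the starred START/END banners seen throughout, disables xtrace around the body, and times it, producing the real/user/sys lines above. A condensed sketch inferred from its output and the argument check visible in the trace (the xtrace toggling is simplified away):

    run_test() {
        [ $# -le 1 ] && return 1                 # needs a test name plus a command
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                # emits the real/user/sys summary
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }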
00:06:48.095 [2024-07-25 09:55:33.122894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.095 [2024-07-25 09:55:33.200438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2398758 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2398758 /var/tmp/spdk2.sock 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2398758 ']' 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.031 09:55:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.031 [2024-07-25 09:55:33.884615] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
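locking_app_on_unlocked_coremask, starting here, inverts the previous case: the first target comes up with --disable-cpumask-locks, so core 0 stays unclaimed, and the second, normally-locking target on the same mask boots fine and takes the lock, which the test then asserts with lslocks against the second pid just below. Condensed shape:

    spdk_tgt -m 0x1 --disable-cpumask-locks & pid1=$!    # core 0, no lock claimed
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!     # same core, claims the lock
    waitforlisten "$pid2" /var/tmp/spdk2.sock
    lslocks -p "$pid2" | grep -q spdk_cpu_lock           # lock belongs to the second app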
00:06:49.031 [2024-07-25 09:55:33.884654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398758 ] 00:06:49.031 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.031 [2024-07-25 09:55:33.958458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.031 [2024-07-25 09:55:34.103204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.598 09:55:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:49.598 09:55:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:49.598 09:55:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2398758 00:06:49.598 09:55:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2398758 00:06:49.598 09:55:34 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:50.534 lslocks: write error 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2398744 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2398744 ']' 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2398744 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2398744 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2398744' 00:06:50.534 killing process with pid 2398744 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2398744 00:06:50.534 09:55:35 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2398744 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2398758 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2398758 ']' 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2398758 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2398758 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2398758' 00:06:51.103 killing process with pid 2398758 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2398758 00:06:51.103 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2398758 00:06:51.362 00:06:51.362 real 0m3.367s 00:06:51.362 user 0m3.582s 00:06:51.362 sys 0m0.978s 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.362 ************************************ 00:06:51.362 END TEST locking_app_on_unlocked_coremask 00:06:51.362 ************************************ 00:06:51.362 09:55:36 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:51.362 09:55:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.362 09:55:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.362 09:55:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.362 ************************************ 00:06:51.362 START TEST locking_app_on_locked_coremask 00:06:51.362 ************************************ 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2399250 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2399250 /var/tmp/spdk.sock 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2399250 ']' 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:51.362 09:55:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.362 [2024-07-25 09:55:36.495444] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:51.362 [2024-07-25 09:55:36.495494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399250 ] 00:06:51.362 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.621 [2024-07-25 09:55:36.560014] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.621 [2024-07-25 09:55:36.637669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2399476 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2399476 /var/tmp/spdk2.sock 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2399476 /var/tmp/spdk2.sock 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2399476 /var/tmp/spdk2.sock 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2399476 ']' 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:52.189 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.189 [2024-07-25 09:55:37.314222] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:52.189 [2024-07-25 09:55:37.314273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399476 ] 00:06:52.189 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.447 [2024-07-25 09:55:37.382990] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2399250 has claimed it. 00:06:52.447 [2024-07-25 09:55:37.383021] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:53.014 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2399476) - No such process 00:06:53.014 ERROR: process (pid: 2399476) is no longer running 00:06:53.014 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:53.014 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:53.014 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:53.014 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:53.014 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:53.014 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:53.014 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2399250 00:06:53.014 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.014 09:55:37 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2399250 00:06:53.273 lslocks: write error 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2399250 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2399250 ']' 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2399250 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2399250 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2399250' 00:06:53.273 killing process with pid 2399250 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2399250 00:06:53.273 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2399250 00:06:53.534 00:06:53.534 real 0m2.231s 00:06:53.534 user 0m2.430s 00:06:53.534 sys 0m0.598s 00:06:53.534 09:55:38 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.534 09:55:38 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.534 ************************************ 00:06:53.534 END TEST locking_app_on_locked_coremask 00:06:53.534 ************************************ 00:06:53.811 09:55:38 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:53.812 09:55:38 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:53.812 09:55:38 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.812 09:55:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:53.812 ************************************ 00:06:53.812 START TEST locking_overlapped_coremask 00:06:53.812 ************************************ 00:06:53.812 09:55:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:53.812 09:55:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2399738 00:06:53.812 09:55:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2399738 /var/tmp/spdk.sock 00:06:53.812 09:55:38 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:53.812 09:55:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2399738 ']' 00:06:53.812 09:55:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.812 09:55:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:53.812 09:55:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.812 09:55:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:53.812 09:55:38 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.812 [2024-07-25 09:55:38.788898] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
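locking_app_on_locked_coremask, concluded above, is the negative case: with the first target holding the core 0 lock, a second locking target on the same mask must abort with "Cannot create lock on core 0, probably process ... has claimed it", and the test asserts exactly that by wrapping waitforlisten in NOT. Condensed:

    spdk_tgt -m 0x1 & pid1=$!                            # claims /var/tmp/spdk_cpu_lock_000
    waitforlisten "$pid1"
    spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!
    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock        # second app must fail to come up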
00:06:53.812 [2024-07-25 09:55:38.788938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399738 ] 00:06:53.812 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.812 [2024-07-25 09:55:38.855479] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.812 [2024-07-25 09:55:38.929700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.812 [2024-07-25 09:55:38.929822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.812 [2024-07-25 09:55:38.929823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2399848 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2399848 /var/tmp/spdk2.sock 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2399848 /var/tmp/spdk2.sock 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2399848 /var/tmp/spdk2.sock 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2399848 ']' 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.765 09:55:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.765 [2024-07-25 09:55:39.645846] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
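locking_overlapped_coremask picks masks that intersect on exactly one core: the first target runs with -m 0x7 (cores 0-2, the three reactors above) and the challenger with -m 0x1c (cores 2-4), so only core 2 is contested. The overlap is plain bitwise AND:

    printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))           # 0x4 -> core 2 is shared
    # hence the expected abort below: "Cannot create lock on core 2, probably process ... has claimed it"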
00:06:54.765 [2024-07-25 09:55:39.645895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399848 ] 00:06:54.765 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.765 [2024-07-25 09:55:39.721951] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2399738 has claimed it. 00:06:54.765 [2024-07-25 09:55:39.721992] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:55.333 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2399848) - No such process 00:06:55.333 ERROR: process (pid: 2399848) is no longer running 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2399738 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2399738 ']' 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2399738 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2399738 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2399738' 00:06:55.333 killing process with pid 2399738 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@969 -- # kill 2399738 00:06:55.333 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2399738 00:06:55.592 00:06:55.592 real 0m1.899s 00:06:55.592 user 0m5.358s 00:06:55.592 sys 0m0.409s 00:06:55.592 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.592 09:55:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.592 ************************************ 00:06:55.592 END TEST locking_overlapped_coremask 00:06:55.592 ************************************ 00:06:55.592 09:55:40 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:55.592 09:55:40 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.592 09:55:40 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.592 09:55:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.592 ************************************ 00:06:55.592 START TEST locking_overlapped_coremask_via_rpc 00:06:55.592 ************************************ 00:06:55.592 09:55:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:55.592 09:55:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2400019 00:06:55.592 09:55:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2400019 /var/tmp/spdk.sock 00:06:55.592 09:55:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:55.592 09:55:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2400019 ']' 00:06:55.592 09:55:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.592 09:55:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.593 09:55:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.593 09:55:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.593 09:55:40 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:55.852 [2024-07-25 09:55:40.761263] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:55.852 [2024-07-25 09:55:40.761308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400019 ] 00:06:55.852 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.852 [2024-07-25 09:55:40.826494] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
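Note: in this variant the target is started with --disable-cpumask-locks, which is why it prints "CPU core locks deactivated." above: no /var/tmp/spdk_cpu_lock_* files are claimed at startup, and locking is switched on later through the framework_enable_cpumask_locks RPC. The lock files that check_remaining_locks inspects (/var/tmp/spdk_cpu_lock_000 through _002 for mask 0x7) are claimed one per core; a rough stand-alone illustration of advisory per-file locking, using flock(1) only for brevity (it is not how spdk_tgt itself takes the locks):

    # Holding an advisory lock on a per-core lock file blocks a second claimant:
    exec 9>/var/tmp/spdk_cpu_lock_002
    flock -n 9 || echo "core 2 already claimed by another process" >&2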
00:06:55.852 [2024-07-25 09:55:40.826519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.852 [2024-07-25 09:55:40.907395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.852 [2024-07-25 09:55:40.907502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.852 [2024-07-25 09:55:40.907503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.419 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.419 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:56.420 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2400245 00:06:56.420 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2400245 /var/tmp/spdk2.sock 00:06:56.420 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:56.420 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2400245 ']' 00:06:56.420 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.420 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:56.420 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.420 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:56.420 09:55:41 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:56.678 [2024-07-25 09:55:41.617365] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:56.678 [2024-07-25 09:55:41.617416] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400245 ] 00:06:56.679 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.679 [2024-07-25 09:55:41.690431] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
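Note: with the cpumask locks disabled on both sides, the two targets coexist on overlapping masks (0x7 and 0x1c both cover core 2, as the reactor messages below show). The point of the test is to enable the locks at runtime over each target's RPC socket, as traced next: the first call should succeed, the second should fail on the shared core. Equivalent direct invocations (rpc_cmd wraps scripts/rpc.py):

    scripts/rpc.py framework_enable_cpumask_locks                        # first target, default /var/tmp/spdk.sock
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks # second target: expected to fail on core 2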
00:06:56.679 [2024-07-25 09:55:41.690454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:56.937 [2024-07-25 09:55:41.843453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.937 [2024-07-25 09:55:41.847184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.937 [2024-07-25 09:55:41.847185] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:57.504 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.504 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:57.504 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:57.504 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.504 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.504 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.504 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.504 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.505 [2024-07-25 09:55:42.441196] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2400019 has claimed it. 
00:06:57.505 request: 00:06:57.505 { 00:06:57.505 "method": "framework_enable_cpumask_locks", 00:06:57.505 "req_id": 1 00:06:57.505 } 00:06:57.505 Got JSON-RPC error response 00:06:57.505 response: 00:06:57.505 { 00:06:57.505 "code": -32603, 00:06:57.505 "message": "Failed to claim CPU core: 2" 00:06:57.505 } 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2400019 /var/tmp/spdk.sock 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2400019 ']' 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:57.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2400245 /var/tmp/spdk2.sock 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2400245 ']' 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:57.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
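Note: the JSON-RPC exchange above is the expected failure: -32603 is the JSON-RPC "internal error" code, and the "Failed to claim CPU core: 2" message confirms the second target could not take the lock the first one already holds. rpc_cmd returns nonzero for an error response and the NOT wrapper inverts that, so the test step passes; a simplified version of the inversion (autotest_common.sh's NOT also validates its argument first):

    NOT() { if "$@"; then return 1; else return 0; fi; }   # simplified: succeed only when the command fails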
00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:57.505 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.764 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:57.764 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:57.764 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:57.764 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.764 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.765 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.765 00:06:57.765 real 0m2.104s 00:06:57.765 user 0m0.866s 00:06:57.765 sys 0m0.179s 00:06:57.765 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:57.765 09:55:42 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:57.765 ************************************ 00:06:57.765 END TEST locking_overlapped_coremask_via_rpc 00:06:57.765 ************************************ 00:06:57.765 09:55:42 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:57.765 09:55:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2400019 ]] 00:06:57.765 09:55:42 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2400019 00:06:57.765 09:55:42 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2400019 ']' 00:06:57.765 09:55:42 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2400019 00:06:57.765 09:55:42 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:57.765 09:55:42 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.765 09:55:42 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2400019 00:06:57.765 09:55:42 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.765 09:55:42 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.765 09:55:42 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2400019' 00:06:57.765 killing process with pid 2400019 00:06:57.765 09:55:42 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2400019 00:06:57.765 09:55:42 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2400019 00:06:58.333 09:55:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2400245 ]] 00:06:58.333 09:55:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2400245 00:06:58.333 09:55:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2400245 ']' 00:06:58.333 09:55:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2400245 00:06:58.333 09:55:43 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:58.333 09:55:43 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:06:58.333 09:55:43 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2400245 00:06:58.333 09:55:43 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:58.333 09:55:43 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:58.333 09:55:43 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2400245' 00:06:58.333 killing process with pid 2400245 00:06:58.333 09:55:43 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2400245 00:06:58.333 09:55:43 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2400245 00:06:58.592 09:55:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.592 09:55:43 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:58.592 09:55:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2400019 ]] 00:06:58.592 09:55:43 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2400019 00:06:58.592 09:55:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2400019 ']' 00:06:58.592 09:55:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2400019 00:06:58.592 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2400019) - No such process 00:06:58.592 09:55:43 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2400019 is not found' 00:06:58.592 Process with pid 2400019 is not found 00:06:58.592 09:55:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2400245 ]] 00:06:58.592 09:55:43 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2400245 00:06:58.592 09:55:43 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2400245 ']' 00:06:58.592 09:55:43 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2400245 00:06:58.592 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2400245) - No such process 00:06:58.592 09:55:43 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2400245 is not found' 00:06:58.592 Process with pid 2400245 is not found 00:06:58.592 09:55:43 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:58.592 00:06:58.592 real 0m17.286s 00:06:58.592 user 0m29.458s 00:06:58.592 sys 0m5.042s 00:06:58.592 09:55:43 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.592 09:55:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.592 ************************************ 00:06:58.592 END TEST cpu_locks 00:06:58.592 ************************************ 00:06:58.592 00:06:58.592 real 0m42.215s 00:06:58.592 user 1m19.861s 00:06:58.592 sys 0m8.369s 00:06:58.592 09:55:43 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.592 09:55:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:58.592 ************************************ 00:06:58.592 END TEST event 00:06:58.592 ************************************ 00:06:58.592 09:55:43 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:58.592 09:55:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.592 09:55:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.592 09:55:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.592 ************************************ 00:06:58.592 START TEST thread 00:06:58.592 ************************************ 00:06:58.592 09:55:43 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:06:58.852 * Looking for test storage... 00:06:58.852 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:06:58.852 09:55:43 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.852 09:55:43 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:58.852 09:55:43 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.852 09:55:43 thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.852 ************************************ 00:06:58.852 START TEST thread_poller_perf 00:06:58.852 ************************************ 00:06:58.852 09:55:43 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:58.852 [2024-07-25 09:55:43.817096] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:58.852 [2024-07-25 09:55:43.817203] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400704 ] 00:06:58.852 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.852 [2024-07-25 09:55:43.890818] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.852 [2024-07-25 09:55:43.967828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.852 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:00.229 ====================================== 00:07:00.229 busy:2107870928 (cyc) 00:07:00.229 total_run_count: 425000 00:07:00.229 tsc_hz: 2100000000 (cyc) 00:07:00.230 ====================================== 00:07:00.230 poller_cost: 4959 (cyc), 2361 (nsec) 00:07:00.230 00:07:00.230 real 0m1.249s 00:07:00.230 user 0m1.155s 00:07:00.230 sys 0m0.090s 00:07:00.230 09:55:45 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.230 09:55:45 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:00.230 ************************************ 00:07:00.230 END TEST thread_poller_perf 00:07:00.230 ************************************ 00:07:00.230 09:55:45 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.230 09:55:45 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:00.230 09:55:45 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.230 09:55:45 thread -- common/autotest_common.sh@10 -- # set +x 00:07:00.230 ************************************ 00:07:00.230 START TEST thread_poller_perf 00:07:00.230 ************************************ 00:07:00.230 09:55:45 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:00.230 [2024-07-25 09:55:45.129289] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
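Note: poller_perf registers 1000 pollers (-b 1000) with a 1-microsecond period (-l 1) and runs for one second (-t 1); the poller_cost it prints is consistent with the raw counters reported below: busy cycles / total_run_count, converted to nanoseconds via tsc_hz. For the first run, 2107870928 / 425000 ≈ 4959 cycles ≈ 2361 ns at 2.1 GHz; the second, zero-period run gives 2101489488 / 5678000 ≈ 370 cycles ≈ 176 ns. The derivation can be reproduced with:

    awk 'BEGIN { busy=2107870928; runs=425000; hz=2100000000
                 cyc = busy / runs
                 printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / hz }'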
00:07:00.230 [2024-07-25 09:55:45.129345] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2400924 ] 00:07:00.230 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.230 [2024-07-25 09:55:45.197411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.230 [2024-07-25 09:55:45.267537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.230 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:01.606 ====================================== 00:07:01.606 busy:2101489488 (cyc) 00:07:01.606 total_run_count: 5678000 00:07:01.606 tsc_hz: 2100000000 (cyc) 00:07:01.606 ====================================== 00:07:01.606 poller_cost: 370 (cyc), 176 (nsec) 00:07:01.606 00:07:01.606 real 0m1.223s 00:07:01.606 user 0m1.135s 00:07:01.606 sys 0m0.084s 00:07:01.607 09:55:46 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.607 09:55:46 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:01.607 ************************************ 00:07:01.607 END TEST thread_poller_perf 00:07:01.607 ************************************ 00:07:01.607 09:55:46 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:01.607 00:07:01.607 real 0m2.699s 00:07:01.607 user 0m2.376s 00:07:01.607 sys 0m0.334s 00:07:01.607 09:55:46 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.607 09:55:46 thread -- common/autotest_common.sh@10 -- # set +x 00:07:01.607 ************************************ 00:07:01.607 END TEST thread 00:07:01.607 ************************************ 00:07:01.607 09:55:46 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:01.607 09:55:46 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.607 09:55:46 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.607 09:55:46 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.607 09:55:46 -- common/autotest_common.sh@10 -- # set +x 00:07:01.607 ************************************ 00:07:01.607 START TEST app_cmdline 00:07:01.607 ************************************ 00:07:01.607 09:55:46 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:07:01.607 * Looking for test storage... 00:07:01.607 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:01.607 09:55:46 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:01.607 09:55:46 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2401236 00:07:01.607 09:55:46 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2401236 00:07:01.607 09:55:46 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:01.607 09:55:46 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2401236 ']' 00:07:01.607 09:55:46 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.607 09:55:46 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.607 09:55:46 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:01.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.607 09:55:46 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.607 09:55:46 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:01.607 [2024-07-25 09:55:46.576735] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:01.607 [2024-07-25 09:55:46.576790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401236 ] 00:07:01.607 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.607 [2024-07-25 09:55:46.641665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.607 [2024-07-25 09:55:46.719848] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:02.543 { 00:07:02.543 "version": "SPDK v24.09-pre git sha1 704257090", 00:07:02.543 "fields": { 00:07:02.543 "major": 24, 00:07:02.543 "minor": 9, 00:07:02.543 "patch": 0, 00:07:02.543 "suffix": "-pre", 00:07:02.543 "commit": "704257090" 00:07:02.543 } 00:07:02.543 } 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:02.543 09:55:47 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.543 09:55:47 
app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:07:02.543 09:55:47 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:02.803 request: 00:07:02.803 { 00:07:02.803 "method": "env_dpdk_get_mem_stats", 00:07:02.803 "req_id": 1 00:07:02.803 } 00:07:02.803 Got JSON-RPC error response 00:07:02.803 response: 00:07:02.803 { 00:07:02.803 "code": -32601, 00:07:02.803 "message": "Method not found" 00:07:02.803 } 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.803 09:55:47 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2401236 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2401236 ']' 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2401236 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2401236 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2401236' 00:07:02.803 killing process with pid 2401236 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@969 -- # kill 2401236 00:07:02.803 09:55:47 app_cmdline -- common/autotest_common.sh@974 -- # wait 2401236 00:07:03.062 00:07:03.062 real 0m1.670s 00:07:03.062 user 0m2.023s 00:07:03.062 sys 0m0.395s 00:07:03.062 09:55:48 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.062 09:55:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:03.062 ************************************ 00:07:03.062 END TEST app_cmdline 00:07:03.062 ************************************ 00:07:03.062 09:55:48 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:03.062 09:55:48 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:03.062 09:55:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.062 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:07:03.062 ************************************ 00:07:03.063 START TEST version 00:07:03.063 ************************************ 00:07:03.063 09:55:48 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:07:03.322 * Looking for test storage... 
00:07:03.322 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:07:03.322 09:55:48 version -- app/version.sh@17 -- # get_header_version major 00:07:03.322 09:55:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:03.322 09:55:48 version -- app/version.sh@14 -- # cut -f2 00:07:03.322 09:55:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.322 09:55:48 version -- app/version.sh@17 -- # major=24 00:07:03.322 09:55:48 version -- app/version.sh@18 -- # get_header_version minor 00:07:03.322 09:55:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:03.322 09:55:48 version -- app/version.sh@14 -- # cut -f2 00:07:03.322 09:55:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.322 09:55:48 version -- app/version.sh@18 -- # minor=9 00:07:03.322 09:55:48 version -- app/version.sh@19 -- # get_header_version patch 00:07:03.322 09:55:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:03.322 09:55:48 version -- app/version.sh@14 -- # cut -f2 00:07:03.322 09:55:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.322 09:55:48 version -- app/version.sh@19 -- # patch=0 00:07:03.322 09:55:48 version -- app/version.sh@20 -- # get_header_version suffix 00:07:03.322 09:55:48 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:07:03.322 09:55:48 version -- app/version.sh@14 -- # cut -f2 00:07:03.322 09:55:48 version -- app/version.sh@14 -- # tr -d '"' 00:07:03.322 09:55:48 version -- app/version.sh@20 -- # suffix=-pre 00:07:03.322 09:55:48 version -- app/version.sh@22 -- # version=24.9 00:07:03.322 09:55:48 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:03.322 09:55:48 version -- app/version.sh@28 -- # version=24.9rc0 00:07:03.322 09:55:48 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:07:03.322 09:55:48 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:03.322 09:55:48 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:03.322 09:55:48 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:03.322 00:07:03.322 real 0m0.157s 00:07:03.322 user 0m0.084s 00:07:03.322 sys 0m0.110s 00:07:03.322 09:55:48 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.322 09:55:48 version -- common/autotest_common.sh@10 -- # set +x 00:07:03.322 ************************************ 00:07:03.322 END TEST version 00:07:03.322 ************************************ 00:07:03.322 09:55:48 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:03.322 09:55:48 -- spdk/autotest.sh@202 -- # uname -s 00:07:03.322 09:55:48 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:03.322 09:55:48 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:03.322 09:55:48 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:03.322 09:55:48 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 00:07:03.322 09:55:48 -- 
spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:03.322 09:55:48 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:03.322 09:55:48 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:03.322 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:07:03.322 09:55:48 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:03.322 09:55:48 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:03.322 09:55:48 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:03.322 09:55:48 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:03.322 09:55:48 -- spdk/autotest.sh@287 -- # '[' rdma = rdma ']' 00:07:03.322 09:55:48 -- spdk/autotest.sh@288 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:03.322 09:55:48 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:03.322 09:55:48 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.322 09:55:48 -- common/autotest_common.sh@10 -- # set +x 00:07:03.322 ************************************ 00:07:03.322 START TEST nvmf_rdma 00:07:03.322 ************************************ 00:07:03.322 09:55:48 nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:07:03.581 * Looking for test storage... 00:07:03.581 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:03.581 09:55:48 nvmf_rdma -- nvmf/nvmf.sh@10 -- # uname -s 00:07:03.581 09:55:48 nvmf_rdma -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:03.581 09:55:48 nvmf_rdma -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:03.581 09:55:48 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:03.582 09:55:48 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.582 09:55:48 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:07:03.582 ************************************ 00:07:03.582 START TEST nvmf_target_core 00:07:03.582 ************************************ 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=rdma 00:07:03.582 * Looking for test storage... 00:07:03.582 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:03.582 ************************************ 00:07:03.582 START TEST nvmf_abort 00:07:03.582 ************************************ 00:07:03.582 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:07:03.842 * Looking for test storage... 
00:07:03.842 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:03.842 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:03.843 09:55:48 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:10.415 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:10.415 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:10.415 09:55:54 
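The block above classifies the host's NICs entirely in bash: gather_supported_nvmf_pci_devs indexes an associative pci_bus_cache map by "vendor:device" ID, folds the matches into per-family arrays (e810, x722, mlx), and settles on the Mellanox pair at 0000:da:00.0/1. The cache itself is populated outside this excerpt; the sketch below assumes it can be rebuilt from lspci -Dnmm, which is an illustrative stand-in, not the suite's actual helper. The unquoted array expansion at the end deliberately mirrors the trace's own mlx+=(...) usage.

    declare -A pci_bus_cache
    # Assumed rebuild of pci_bus_cache[]: lspci -Dnmm prints one device per
    # line as: 0000:da:00.0 "0200" "15b3" "1015" ... (class, vendor, device).
    while read -r addr class vendor device _; do
        vendor=${vendor//\"/} device=${device//\"/}
        pci_bus_cache["0x$vendor:0x$device"]+=" $addr"
    done < <(lspci -Dnmm)

    mellanox=0x15b3
    mlx=(${pci_bus_cache["$mellanox:0x1015"]})   # ConnectX-4 Lx, as matched above
    echo "Mellanox 0x1015 functions: ${mlx[*]}"  # -> 0000:da:00.0 0000:da:00.1 here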
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:10.415 Found net devices under 0000:da:00.0: mlx_0_0 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.415 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:10.416 Found net devices under 0000:da:00.1: mlx_0_1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@420 -- # rdma_device_init 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # uname 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:10.416 09:55:54 
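Mapping a PCI function to its Linux interface name, as traced above, needs nothing beyond sysfs: every device directory carries a net/ subdirectory whose entries are the netdev names. The glob and the ##*/ basename strip are lifted straight from nvmf/common.sh@383/399; only the two-address loop wrapper is added here for illustration.

    for pci in 0000:da:00.0 0000:da:00.1; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep basenames only
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
    done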
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:10.416 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:10.416 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:07:10.416 altname enp218s0f0np0 00:07:10.416 altname ens818f0np0 00:07:10.416 inet 192.168.100.8/24 scope global mlx_0_0 00:07:10.416 valid_lft forever preferred_lft forever 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@73 -- # for 
nic_name in $(get_rdma_if_list) 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:10.416 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:10.416 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:07:10.416 altname enp218s0f1np1 00:07:10.416 altname ens818f1np1 00:07:10.416 inet 192.168.100.9/24 scope global mlx_0_1 00:07:10.416 valid_lft forever preferred_lft forever 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:10.416 09:55:54 
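allocate_nic_ips, traced above for both ports, resolves each interface's IPv4 address with a three-stage pipeline: ip -o prints one record per address, awk keeps field 4 (the CIDR form, 192.168.100.8/24), and cut drops the prefix length. A self-contained version of that helper, matching common.sh@112-113:

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    ip=$(get_ip_address mlx_0_0)      # -> 192.168.100.8 on this rig
    # common.sh@75 falls through to assigning 192.168.100.$((count++)) when
    # this comes back empty; both ports already had addresses in this run.
    [[ -z $ip ]] && echo "mlx_0_0 has no IPv4 address" >&2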
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # continue 2 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:10.416 192.168.100.9' 00:07:10.416 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:10.416 192.168.100.9' 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # head -n 1 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:10.417 192.168.100.9' 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # tail -n +2 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # head -n 1 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 
-- # set +x 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2404786 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2404786 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2404786 ']' 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:10.417 09:55:54 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.417 [2024-07-25 09:55:54.599079] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:10.417 [2024-07-25 09:55:54.599135] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:10.417 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.417 [2024-07-25 09:55:54.668362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.417 [2024-07-25 09:55:54.742693] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:10.417 [2024-07-25 09:55:54.742736] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:10.417 [2024-07-25 09:55:54.742743] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:10.417 [2024-07-25 09:55:54.742748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:10.417 [2024-07-25 09:55:54.742753] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
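nvmfappstart above backgrounds nvmf_tgt (-m 0xE pins it to cores 1-3, -e 0xFFFF enables every tracepoint group, per the notices that follow), records nvmfpid=2404786, then blocks in waitforlisten until the RPC socket answers. The full helper lives in autotest_common.sh and is not reproduced in this trace; the sketch below captures only its visible contract: poll up to max_retries=100, bail out early if the process dies.

    waitforlisten_sketch() {   # assumed shape; the real helper is in autotest_common.sh
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # socket exists: it is listening
            sleep 0.5                                # assumed poll interval
        done
        return 1
    }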
00:07:10.417 [2024-07-25 09:55:54.742873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.417 [2024-07-25 09:55:54.742958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.417 [2024-07-25 09:55:54.742957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.417 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.417 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:07:10.417 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:10.417 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:10.417 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.417 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:10.417 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:07:10.417 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.417 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.417 [2024-07-25 09:55:55.474621] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb9d200/0xba16f0) succeed. 00:07:10.417 [2024-07-25 09:55:55.492702] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb9e7a0/0xbe2d80) succeed. 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.676 Malloc0 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.676 Delay0 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
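The trap armed at nvmf/common.sh@484 above ensures that Ctrl-C, a kill from the harness, or any early exit still dumps the target's shared-memory trace and tears everything down; target/abort.sh@36 later in this log disarms it once the body has passed. The generic shape of that pattern, with a stand-in cleanup function since process_shm and nvmftestfini are suite helpers not shown here:

    cleanup() { echo 'tearing down'; }     # stand-in for nvmftestfini
    trap 'cleanup' SIGINT SIGTERM EXIT     # armed before the test body runs
    # ... test body: create transport, subsystem, drive I/O ...
    trap - SIGINT SIGTERM EXIT             # disarmed on success, as at abort.sh@36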
00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.676 [2024-07-25 09:55:55.655480] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:10.676 09:55:55 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:10.676 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.676 [2024-07-25 09:55:55.768011] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:13.258 Initializing NVMe Controllers 00:07:13.258 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:07:13.258 controller IO queue size 128 less than required 00:07:13.258 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:13.258 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:13.258 Initialization complete. Launching workers. 
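Collapsed out of the rpc_cmd trace above, the whole abort scenario is six RPCs plus one client invocation. Every command, argument, and address below appears verbatim in the log; only the $rootdir shorthand is introduced for readability. The Delay0 bdev (-r/-t/-w/-n set average and tail read/write latencies in microseconds, so 1000000 is one second) is what keeps 128 queued commands in flight long enough for the aborts to land.

    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    rpc=$rootdir/scripts/rpc.py

    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256
    $rpc bdev_malloc_create 64 4096 -b Malloc0        # 64 MB ramdisk, 4 KiB blocks
    $rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1 s avg/p99 latencies (us)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

    # Client side: single core (-c 0x1), 1 s runtime, queue depth 128.
    $rootdir/build/examples/abort \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -c 0x1 -t 1 -l warning -q 128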
00:07:13.258 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51400 00:07:13.258 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51461, failed to submit 62 00:07:13.258 success 51401, unsuccess 60, failed 0 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:13.258 rmmod nvme_rdma 00:07:13.258 rmmod nvme_fabrics 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2404786 ']' 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2404786 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2404786 ']' 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2404786 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2404786 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2404786' 00:07:13.258 killing process with pid 2404786 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2404786 00:07:13.258 09:55:57 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2404786 00:07:13.258 09:55:58 
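nvmftestfini's module teardown, traced above, runs under set +e because modprobe -r can transiently fail while the fabric disconnect is still settling; common.sh@121 retries up to 20 times before giving up. A condensed version of that sequence plus the killprocess tail follows; the pause between attempts is an assumption, since this run unloaded cleanly on the first pass.

    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-rdma &&
            modprobe -v -r nvme-fabrics && break
        sleep 1                      # assumed back-off between attempts
    done
    set -e

    kill "$nvmfpid"                  # killprocess 2404786 in the trace above
    wait "$nvmfpid"                  # reap it so the next test starts clean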
nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:07:13.258 00:07:13.258 real 0m9.522s 00:07:13.258 user 0m14.274s 00:07:13.258 sys 0m4.708s 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:13.258 ************************************ 00:07:13.258 END TEST nvmf_abort 00:07:13.258 ************************************ 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.258 ************************************ 00:07:13.258 START TEST nvmf_ns_hotplug_stress 00:07:13.258 ************************************ 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:07:13.258 * Looking for test storage... 00:07:13.258 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:13.258 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:07:13.518 09:55:58 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:07:18.822 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # 
pci_devs+=("${x722[@]}") 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:07:18.823 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:07:18.823 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.823 09:56:03 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:07:18.823 Found net devices under 0000:da:00.0: mlx_0_0 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:07:18.823 Found net devices under 0000:da:00.1: mlx_0_1 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:07:18.823 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # uname 00:07:19.083 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:07:19.083 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:07:19.083 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:07:19.083 09:56:03 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:07:19.083 09:56:04 
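rdma_device_init, re-run here for the hotplug-stress test, loads the same kernel stack every time; the order below mirrors the trace at common.sh@62-68, with a one-line gloss per module (the glosses are standard kernel knowledge, not taken from this log):

    modprobe ib_cm      # InfiniBand connection manager
    modprobe ib_core    # core verbs/device layer the rest depend on
    modprobe ib_umad    # userspace MAD (management datagram) access
    modprobe ib_uverbs  # userspace verbs, needed by libibverbs consumers
    modprobe iw_cm      # iWARP connection manager
    modprobe rdma_cm    # transport-neutral RDMA connection manager
    modprobe rdma_ucm   # its userspace interface, used by librdmacm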
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:07:19.083 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:19.083 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:07:19.083 altname enp218s0f0np0 00:07:19.083 altname ens818f0np0 00:07:19.083 inet 192.168.100.8/24 scope global mlx_0_0 00:07:19.083 valid_lft forever preferred_lft forever 
00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:07:19.083 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:07:19.083 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:07:19.083 altname enp218s0f1np1 00:07:19.083 altname ens818f1np1 00:07:19.083 inet 192.168.100.9/24 scope global mlx_0_1 00:07:19.083 valid_lft forever preferred_lft forever 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:07:19.083 09:56:04 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # continue 2 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:07:19.083 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:07:19.083 192.168.100.9' 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:07:19.084 192.168.100.9' 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # head -n 1 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:07:19.084 192.168.100.9' 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # tail -n +2 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # head -n 1 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:07:19.084 09:56:04 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2408564 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2408564 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2408564 ']' 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.084 09:56:04 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:19.084 [2024-07-25 09:56:04.231345] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:19.084 [2024-07-25 09:56:04.231387] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:19.343 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.343 [2024-07-25 09:56:04.297038] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:19.343 [2024-07-25 09:56:04.370612] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:19.343 [2024-07-25 09:56:04.370653] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:19.343 [2024-07-25 09:56:04.370659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:19.343 [2024-07-25 09:56:04.370665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:19.343 [2024-07-25 09:56:04.370669] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
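nvmfappstart above launches the target with core mask 0xE (three reactors, confirmed by the notices below) and then blocks in waitforlisten until the RPC server answers on /var/tmp/spdk.sock, giving up after max_retries=100 per the trace. A condensed sketch of that start-and-wait sequence under the traced arguments; the socket existence test stands in for the real readiness probe and is an assumption:

    # Start nvmf_tgt and wait for its RPC socket (simplified reconstruction
    # of the traced nvmfappstart/waitforlisten flow).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i > 0; i--)); do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # target died during startup
        [[ -S $rpc_addr ]] && break                # socket present: treat as listening
        sleep 0.5
    done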
00:07:19.343 [2024-07-25 09:56:04.370786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:19.343 [2024-07-25 09:56:04.370892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:19.343 [2024-07-25 09:56:04.370893] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:19.911 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.911 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:07:19.911 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:19.911 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:19.911 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:20.170 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.170 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:07:20.170 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:07:20.170 [2024-07-25 09:56:05.247683] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1eb2200/0x1eb66f0) succeed. 00:07:20.170 [2024-07-25 09:56:05.256674] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1eb37a0/0x1ef7d80) succeed. 00:07:20.429 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:20.429 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:07:20.688 [2024-07-25 09:56:05.707076] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:07:20.688 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:07:20.948 09:56:05 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:07:20.948 Malloc0 00:07:20.948 09:56:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:21.207 Delay0 00:07:21.207 09:56:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.465 09:56:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create NULL1 1000 512 00:07:21.465 NULL1 00:07:21.724 09:56:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:07:21.724 09:56:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:07:21.724 09:56:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2409048 00:07:21.724 09:56:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:21.724 09:56:06 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.724 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.101 Read completed with error (sct=0, sc=11) 00:07:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.101 09:56:07 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.101 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:23.101 09:56:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:07:23.101 09:56:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:07:23.360 true 00:07:23.360 09:56:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:23.360 09:56:08 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.296 09:56:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.296 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:24.296 09:56:09 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:07:24.296 09:56:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:07:24.555 true 00:07:24.555 09:56:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:24.555 09:56:09 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.491 09:56:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:25.491 09:56:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:07:25.491 09:56:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:07:25.749 true 00:07:25.749 09:56:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:25.749 09:56:10 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.686 09:56:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.686 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:26.686 09:56:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:07:26.686 09:56:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:07:26.945 true 00:07:26.945 09:56:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
2409048 00:07:26.945 09:56:11 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:27.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.882 09:56:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:27.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.882 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:27.882 09:56:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:07:27.882 09:56:12 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:07:28.140 true 00:07:28.140 09:56:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:28.140 09:56:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.090 09:56:13 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:29.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:29.090 09:56:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:07:29.090 09:56:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:07:29.348 true 00:07:29.348 09:56:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:29.348 09:56:14 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.284 09:56:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:30.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.284 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:07:30.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.284 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:30.284 09:56:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:07:30.284 09:56:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:30.543 true 00:07:30.543 09:56:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:30.543 09:56:15 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.479 09:56:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:31.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.479 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:31.479 09:56:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:31.479 09:56:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:31.738 true 00:07:31.738 09:56:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:31.738 09:56:16 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:32.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.732 09:56:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:32.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.732 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:32.732 09:56:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:32.732 09:56:17 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:32.732 true 00:07:32.992 09:56:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:32.992 09:56:17 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:33.928 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.928 09:56:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:33.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.929 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:33.929 09:56:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:33.929 09:56:18 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:33.929 true 00:07:34.188 09:56:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:34.188 09:56:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:35.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.123 09:56:19 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:35.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:35.123 09:56:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:35.123 09:56:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:35.123 true 00:07:35.382 09:56:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:35.382 09:56:20 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:35.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:36.208 09:56:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.208 09:56:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:36.208 09:56:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:36.467 true 00:07:36.467 09:56:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:36.467 09:56:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:36.725 09:56:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:36.725 09:56:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:36.725 09:56:21 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:36.984 true 00:07:36.984 09:56:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:36.984 09:56:22 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.362 09:56:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:38.362 09:56:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:38.362 09:56:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:38.362 true 00:07:38.362 09:56:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:38.362 09:56:23 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:39.299 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:07:39.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.299 09:56:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:39.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.558 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:39.558 09:56:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:39.558 09:56:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:39.558 true 00:07:39.558 09:56:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:39.558 09:56:24 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:40.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.495 09:56:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:40.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.753 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:40.753 09:56:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:40.753 09:56:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:40.753 true 00:07:40.753 09:56:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:40.753 09:56:25 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:41.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.690 09:56:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.690 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:07:41.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.690 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.949 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:41.949 09:56:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:41.949 09:56:26 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:41.949 true 00:07:41.949 09:56:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:41.949 09:56:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:42.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.885 09:56:27 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:42.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:42.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.144 09:56:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:43.144 09:56:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:43.144 true 00:07:43.402 09:56:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:43.402 09:56:28 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:43.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:43.969 09:56:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:43.969 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.228 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:44.228 09:56:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:44.228 09:56:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1019 00:07:44.486 true 00:07:44.486 09:56:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:44.486 09:56:29 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:45.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.423 09:56:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:45.423 09:56:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:45.423 09:56:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:45.681 true 00:07:45.681 09:56:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:45.681 09:56:30 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:46.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.614 09:56:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:46.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.614 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:46.614 09:56:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:46.614 09:56:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:46.872 true 00:07:46.872 09:56:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:46.872 09:56:31 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:47.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.805 09:56:32 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:47.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:47.805 09:56:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:47.805 09:56:32 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:48.063 true 00:07:48.063 09:56:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:48.063 09:56:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.037 09:56:33 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.037 09:56:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:49.037 09:56:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:49.295 true 00:07:49.295 09:56:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:49.295 09:56:34 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:50.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.230 09:56:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:50.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.230 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:07:50.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.230 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.230 09:56:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:50.230 09:56:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:50.488 true 00:07:50.488 09:56:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:50.488 09:56:35 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.422 09:56:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.422 09:56:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:51.422 09:56:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:51.681 true 00:07:51.681 09:56:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:51.681 09:56:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.939 09:56:36 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.939 09:56:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:51.939 09:56:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:52.197 true 00:07:52.197 09:56:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:52.197 09:56:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.455 09:56:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.455 09:56:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:52.455 09:56:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:52.713 true 00:07:52.713 09:56:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:52.713 09:56:37 
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.971 09:56:37 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.230 09:56:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:53.230 09:56:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:53.230 true 00:07:53.230 09:56:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:53.230 09:56:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.488 09:56:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:53.747 09:56:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:53.747 09:56:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:53.747 true 00:07:53.747 09:56:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:53.747 09:56:38 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.005 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.005 Initializing NVMe Controllers 00:07:54.005 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:07:54.005 Controller IO queue size 128, less than required. 00:07:54.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:54.005 Controller IO queue size 128, less than required. 00:07:54.005 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:54.005 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:54.005 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:54.005 Initialization complete. Launching workers. 
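The 30-second randread workload launched earlier (spdk_nvme_perf, pid 2409048) is what phase one races against: on each pass the script hot-removes namespace 1, re-adds Delay0, and grows NULL1 by one block, which is why the long stretch above is a wall of suppressed "Read completed with error (sct=0, sc=11)" messages, consistent with I/O racing the namespace removals. A condensed sketch of that loop, reconstructed from the traced ns_hotplug_stress.sh lines @44-@50; using kill -0 on PERF_PID as the loop condition is an inference from the trace, not a quoted line:

    # Phase-one hotplug loop as traced (@44-@50): spin while perf is alive,
    # hot-removing/re-adding a namespace and resizing NULL1 each pass.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
        ((++null_size))
        $rpc_py bdev_null_resize NULL1 "$null_size"   # reaches 1030 in this run
    done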
00:07:54.005 ========================================================
00:07:54.005 Latency(us)
00:07:54.005 Device Information : IOPS MiB/s Average min max
00:07:54.005 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5289.25 2.58 21363.22 768.06 1138115.24
00:07:54.005 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 34179.70 16.69 3744.86 1589.89 294540.31
00:07:54.005 ========================================================
00:07:54.005 Total : 39468.94 19.27 6105.91 768.06 1138115.24
00:07:54.264 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:54.264 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:54.264 true 00:07:54.264 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2409048 00:07:54.264 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2409048) - No such process 00:07:54.264 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2409048 00:07:54.264 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.523 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:54.782 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:54.782 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:54.782 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:54.782 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:54.782 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:55.040 null0 00:07:55.040 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.040 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.040 09:56:39 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:55.040 null1 00:07:55.040 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.040 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.040 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:55.299 null2 00:07:55.299 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.299
09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.299 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:55.557 null3 00:07:55.557 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.557 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.557 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:55.557 null4 00:07:55.815 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.815 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.815 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:55.815 null5 00:07:55.815 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:55.815 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:55.815 09:56:40 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:56.074 null6 00:07:56.074 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:56.074 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:56.074 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:56.333 null7 00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
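With perf finished (the "kill: (2409048) - No such process" line above) and namespaces 1 and 2 removed, phase two sets up eight null backing bdevs, null0 through null7, each created as bdev_null_create with size 100 and a 4096-byte block size per the traced arguments, plus the pids array the forked workers land in; the first fork is already visible at the end of the stretch above, and a sketch of the worker loop itself follows further below. The unrolled @58-@60 trace condenses to:

    # Phase-two setup as traced (ns_hotplug_stress.sh @58-@60): one null
    # bdev per concurrent hotplug worker.
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        $rpc_py bdev_null_create "null$i" 100 4096
    done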
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
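The interleaved @14-@18 and @62-@64 lines above come from eight copies of the script's add_remove function being launched in parallel, one per namespace. Reconstructed from the xtrace (a sketch implied by the trace, not the verbatim test script), each worker looks like:

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace ten times.
    # $rpc and the subsystem NQN are taken from the trace above.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }

Because all eight workers share one subsystem, the target continuously gains and loses namespaces 1 through 8, which is exactly the hotplug stress the test is after.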
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2414887 2414888 2414890 2414892 2414895 2414896 2414898 2414899
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:56.333 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:56.334 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:56.334 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:56.334 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:56.334 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:56.593 09:56:41 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
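The @66 wait above lists the eight worker PIDs (2414887 through 2414899) that were collected one pids+=($!) at a time. The spawn-and-reap pattern implied by the @59-@66 trace lines is the usual bash fan-out, sketched here with the same names the trace uses:

    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove "$((i + 1))" "null$i" &   # nsid 1..8 is backed by bdev null0..null7
        pids+=($!)                           # remember each background worker's PID
    done
    wait "${pids[@]}"                        # block until every hotplug worker finishes

The interleaving of the workers' xtrace output is nondeterministic, which is why the add/remove lines arrive in varying namespace order from round to round.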
00:07:56.593 - 00:07:59.965 [09:56:41 - 09:56:45: the eight add_remove workers repeat this same cycle for their remaining iterations. Each pass logs the @16 loop checks (( ++i )) and (( i < 10 )), an @17 call to /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 null<nsid-1>, and the matching @18 call to rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 <nsid>, with the traces of all eight workers interleaved in nondeterministic order; the pattern continues until every worker's (( i < 10 )) test fails after its tenth round.]
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:07:59.965 rmmod nvme_rdma 00:07:59.965 rmmod nvme_fabrics 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2408564 ']' 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2408564 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2408564 ']' 00:07:59.965 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2408564 00:08:00.224 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:00.224 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:00.224 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2408564 00:08:00.224 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:00.224 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:00.224 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2408564' 00:08:00.224 killing process with pid 2408564 00:08:00.224 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2408564 00:08:00.224 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2408564 00:08:00.484 09:56:45 
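
The iterations traced above are the core of ns_hotplug_stress: ten rounds of attaching the eight null bdevs (null0-null7) to cnode1 as namespaces 1-8 and detaching them all again, in varying order. A minimal sketch of such a loop, assuming the rpc.py path, NQN, and bdev names from this run; the shuf-based ordering is illustrative, not necessarily the script's exact scheme:

#!/usr/bin/env bash
# Hotplug-stress sketch modeled on the nvmf_subsystem_add_ns /
# nvmf_subsystem_remove_ns calls traced above. RPC path, NQN, and the
# null0..null7 bdev names are taken from the log; the shuf-based
# ordering is an assumption for illustration.
RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
for ((i = 0; i < 10; i++)); do
    for n in $(shuf -i 1-8); do                 # attach nsid 1..8 in random order
        "$RPC" nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
    done
    for n in $(shuf -i 1-8); do                 # detach them again in random order
        "$RPC" nvmf_subsystem_remove_ns "$NQN" "$n"
    done
done
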
nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:00.484 00:08:00.484 real 0m47.114s 00:08:00.484 user 3m17.490s 00:08:00.484 sys 0m11.930s 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:00.484 ************************************ 00:08:00.484 END TEST nvmf_ns_hotplug_stress 00:08:00.484 ************************************ 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:00.484 ************************************ 00:08:00.484 START TEST nvmf_delete_subsystem 00:08:00.484 ************************************ 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:08:00.484 * Looking for test storage... 00:08:00.484 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:00.484 09:56:45 
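
The nvmftestfini teardown that closed nvmf_ns_hotplug_stress above stops the target process and then retries unloading the initiator kernel modules, since nvme-rdma can stay referenced for a moment while connections drain. A hedged sketch of that pattern; the pid value is the one observed in this run, and wait is valid in the harness because nvmf_tgt is a child of the calling shell:

# Teardown sketch mirroring nvmftestfini above: kill the nvmf_tgt pid,
# then retry the module unload until the reference count drops.
NVMFPID=2408564                      # pid observed in this run
kill "$NVMFPID" && wait "$NVMFPID"   # wait works when nvmf_tgt is our child
set +e                               # rmmod may fail while the module is busy
for i in {1..20}; do
    modprobe -v -r nvme-rdma && break
    sleep 1
done
modprobe -v -r nvme-fabrics
set -e
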
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:00.484 09:56:45 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:07.098 09:56:51 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:07.098 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:07.098 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:07.098 Found net devices under 0000:da:00.0: mlx_0_0 00:08:07.098 09:56:51 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:07.098 Found net devices under 0000:da:00.1: mlx_0_1 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:07.098 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@420 -- # rdma_device_init 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # uname 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:07.099 09:56:51 
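
gather_supported_nvmf_pci_devs, traced above, matches known Intel (e810/x722) and Mellanox device IDs against the PCI bus and resolves each hit to its kernel net device through sysfs, which is where the two "Found net devices under 0000:da:00.x: mlx_0_x" lines come from. A compact sketch of that sysfs lookup, trimmed to the single ID this rig actually has (15b3:1015, ConnectX-4 Lx); the real helper walks a whole table of IDs via a cached lspci scan, so this is only an approximation:

# Map RDMA-capable PCI functions to kernel net device names via sysfs,
# approximating the "Found net devices under ..." step above. Only the
# 15b3:1015 ID found in this run is queried.
net_devs=()
for pci in $(lspci -Dn -d 15b3:1015 | awk '{print $1}'); do
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $path ]] || continue           # skip functions with no bound netdev
        echo "Found net devices under $pci: ${path##*/}"
        net_devs+=("${path##*/}")
    done
done
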
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:07.099 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:07.099 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:08:07.099 altname enp218s0f0np0 00:08:07.099 altname ens818f0np0 00:08:07.099 inet 192.168.100.8/24 scope global mlx_0_0 00:08:07.099 valid_lft forever preferred_lft forever 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:07.099 09:56:51 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:07.099 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:07.099 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:08:07.099 altname enp218s0f1np1 00:08:07.099 altname ens818f1np1 00:08:07.099 inet 192.168.100.9/24 scope global mlx_0_1 00:08:07.099 valid_lft forever preferred_lft forever 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:07.099 09:56:51 
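
allocate_nic_ips then reads an IPv4 address per RDMA interface; the get_ip_address helper traced above is just a short pipeline over ip -o -4 addr show, which on this rig returns 192.168.100.8 for mlx_0_0 and 192.168.100.9 for mlx_0_1:

# get_ip_address as traced above: field 4 of `ip -o -4 addr show` is
# "ADDR/PREFIXLEN", and cut strips the prefix length.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0    # -> 192.168.100.8 in this run
get_ip_address mlx_0_1    # -> 192.168.100.9
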
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # continue 2 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:07.099 192.168.100.9' 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:07.099 192.168.100.9' 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # head -n 1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:07.099 192.168.100.9' 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # tail -n +2 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # head -n 1 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:07.099 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # 
modprobe nvme-rdma 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2418928 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2418928 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2418928 ']' 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:07.100 09:56:51 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.100 [2024-07-25 09:56:51.363340] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:07.100 [2024-07-25 09:56:51.363394] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.100 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.100 [2024-07-25 09:56:51.419495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:07.100 [2024-07-25 09:56:51.503069] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.100 [2024-07-25 09:56:51.503106] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.100 [2024-07-25 09:56:51.503114] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.100 [2024-07-25 09:56:51.503120] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.100 [2024-07-25 09:56:51.503125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
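
nvmfappstart, traced above, launches nvmf_tgt with core mask 0x3 (hence the two reactor lines that follow) and blocks until the app's RPC socket answers. A sketch of that start-and-poll pattern; the rpc_get_methods readiness probe is an assumption here, as the real waitforlisten helper has its own polling logic:

# Start the target on cores 0-1 and wait for its RPC server, roughly
# what nvmfappstart/waitforlisten do above. The rpc_get_methods probe
# is an illustrative assumption.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!
for ((i = 0; i < 100; i++)); do
    "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods &>/dev/null && break
    sleep 0.1
done
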
00:08:07.100 [2024-07-25 09:56:51.503182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.100 [2024-07-25 09:56:51.503183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.100 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:07.100 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:07.100 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:07.100 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.100 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.100 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.100 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:07.100 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.100 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.100 [2024-07-25 09:56:52.238335] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x24d03c0/0x24d48b0) succeed. 00:08:07.100 [2024-07-25 09:56:52.247006] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x24d1870/0x2515f40) succeed. 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.359 [2024-07-25 09:56:52.339541] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.359 NULL1 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.359 Delay0 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2419054 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:07.359 09:56:52 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:07.359 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.359 [2024-07-25 09:56:52.455579] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
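
Everything the test needs is now assembled in the trace above: an RDMA transport, subsystem cnode1 capped at 10 namespaces (-m 10), a listener on 192.168.100.8:4420, and a 1000 MiB, 512 B-block null bdev wrapped in a delay bdev so that requests linger in flight; spdk_nvme_perf (pid 2419054 here) then drives queued random I/O against it for 5 seconds. The same sequence as a standalone sketch, with all values carried over from the logged commands:

# Build the data path traced above and start perf against it.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1
"$RPC" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
"$RPC" nvmf_subsystem_add_listener "$NQN" -t rdma -a 192.168.100.8 -s 4420
"$RPC" bdev_null_create NULL1 1000 512
# 1,000,000 us (1 s) on every latency knob keeps a deep backlog of
# outstanding I/O for the deletion step to land on.
"$RPC" bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
"$RPC" nvmf_subsystem_add_ns "$NQN" Delay0
"$SPDK/build/bin/spdk_nvme_perf" -c 0xC \
    -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
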
00:08:09.261 09:56:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:09.261 09:56:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:09.261 09:56:54 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:10.635 NVMe io qpair process completion error 00:08:10.635 NVMe io qpair process completion error 00:08:10.635 NVMe io qpair process completion error 00:08:10.635 NVMe io qpair process completion error 00:08:10.635 NVMe io qpair process completion error 00:08:10.635 NVMe io qpair process completion error 00:08:10.635 NVMe io qpair process completion error 00:08:10.635 09:56:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.635 09:56:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:10.635 09:56:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2419054 00:08:10.635 09:56:55 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:10.894 09:56:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:10.894 09:56:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2419054 00:08:10.894 09:56:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:11.462 Write completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Write completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Write completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Write completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Write completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Write completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Write completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Write completed with error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6 00:08:11.462 Read completed with 
error (sct=0, sc=8) 00:08:11.462 starting I/O failed: -6
[... the same two records, "Read completed with error (sct=0, sc=8)" and "Write completed with error (sct=0, sc=8)", interleaved with "starting I/O failed: -6" markers from spdk_nvme_perf, repeat several hundred more times within the same second as the deleted subsystem fails every remaining outstanding request; the identical records are elided here ...]
00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Write completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Write completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Write completed with error (sct=0, sc=8) 00:08:11.464 Write completed with error (sct=0, sc=8) 00:08:11.464 Write completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Read completed with error (sct=0, sc=8) 00:08:11.464 Initializing NVMe Controllers 00:08:11.464 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.464 Controller IO queue size 128, less than required. 00:08:11.464 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:11.464 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:11.464 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:11.464 Initialization complete. Launching workers. 00:08:11.464 ======================================================== 00:08:11.464 Latency(us) 00:08:11.464 Device Information : IOPS MiB/s Average min max 00:08:11.464 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.54 0.04 1592792.72 1000123.85 2972764.13 00:08:11.464 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.54 0.04 1594187.77 1000493.19 2974124.36 00:08:11.464 ======================================================== 00:08:11.464 Total : 161.08 0.08 1593490.25 1000123.85 2974124.36 00:08:11.464 00:08:11.464 09:56:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:11.464 09:56:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2419054 00:08:11.464 09:56:56 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:11.464 [2024-07-25 09:56:56.557993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:08:11.464 [2024-07-25 09:56:56.558029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
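The trace entries just above ((( delay++ > 30 )), kill -0 2419054, sleep 0.5) are delete_subsystem.sh polling the backgrounded perf job while the subsystem is deleted out from under it: kill -0 sends no signal, it only probes whether the PID is still alive. A minimal sketch of that wait pattern (variable names illustrative, not the script's exact source):

    perf_pid=$!      # PID of the backgrounded spdk_nvme_perf
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && break    # bail out after ~15 s of 0.5 s sleeps
        sleep 0.5
    done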
00:08:11.464 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2419054 00:08:12.031 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2419054) - No such process 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2419054 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2419054 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2419054 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.031 [2024-07-25 09:56:57.077404] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:12.031 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.032 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.032 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:12.032 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.032 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.032 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2419977 00:08:12.032 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:12.032 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:12.032 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:12.032 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:12.032 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.032 [2024-07-25 09:56:57.178882] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:12.597 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:12.597 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:12.597 09:56:57 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.164 09:56:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.164 09:56:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:13.164 09:56:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.731 09:56:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.731 09:56:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:13.731 09:56:58 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:13.989 09:56:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:13.989 09:56:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:13.989 09:56:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:14.556 09:56:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:14.556 09:56:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:14.556 09:56:59 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.122 09:57:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.122 09:57:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:15.122 09:57:00 
nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:15.687 09:57:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:15.687 09:57:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:15.687 09:57:00 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.251 09:57:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.251 09:57:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:16.251 09:57:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.509 09:57:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:16.509 09:57:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:16.509 09:57:01 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.076 09:57:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.076 09:57:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:17.076 09:57:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.642 09:57:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.642 09:57:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:17.642 09:57:02 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.209 09:57:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.209 09:57:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:18.209 09:57:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.774 09:57:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.774 09:57:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:18.774 09:57:03 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:19.031 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:19.031 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977 00:08:19.031 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:19.289 Initializing NVMe Controllers 00:08:19.289 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:19.289 Controller IO queue size 128, less than required. 00:08:19.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
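The "queue size 128, less than required" warning ties directly to the perf flags visible in the trace above. For reference, the invocation with our reading of each flag (the annotations are editorial, not tool output; spdk_nvme_perf --help is authoritative):

    # -c 0xC           worker cores 2 and 3 (mask 0b1100), matching the lcore 2/3 associations
    # -q 128           128 outstanding I/Os per qpair; the controller's IO queue is also
    #                  128 deep, so excess requests queue at the NVMe driver (the warning)
    # -w randrw -M 70  random mixed workload, 70% reads / 30% writes
    # -o 512           512-byte I/Os;  -t 3: run for 3 seconds;  -P 4: qpairs per namespace
    ./build/bin/spdk_nvme_perf -c 0xC \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4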
00:08:19.289 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:19.289 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:19.289 Initialization complete. Launching workers.
00:08:19.289 ========================================================
00:08:19.289                                                                            Latency(us)
00:08:19.289 Device Information                                                       : IOPS    MiB/s  Average     min         max
00:08:19.289 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00  0.06   1001389.37  1000059.20  1004331.77
00:08:19.289 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00  0.06   1002782.78  1000741.90  1005810.10
00:08:19.289 ========================================================
00:08:19.289 Total                                                                    : 256.00  0.12   1002086.08  1000059.20  1005810.10
00:08:19.289
00:08:19.547 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:19.547 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2419977
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2419977) - No such process
00:08:19.547 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2419977
00:08:19.547 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:19.547 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:19.547 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:08:19.547 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2418928 ']'
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2418928
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2418928 ']'
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2418928
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
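The numbers in the table above are internally consistent, which is a quick way to sanity-check a perf run: with the Delay0 namespace holding every I/O for ~1 s and queue depth 128, each core sustains ~128 IOPS. A throwaway cross-check (editorial arithmetic, not part of the test harness):

    iops=128 io_size=512 qdepth=128
    awk -v n="$iops" -v s="$io_size" 'BEGIN { printf "MiB/s  = %.2f\n", n*s/1048576 }'  # 0.06, as reported per core
    awk -v q="$qdepth" -v n="$iops" 'BEGIN { printf "avg us = %.0f\n", q/n*1e6 }'       # ~1e6 us by Little's law, close to the Average column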
00:08:19.548 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2418928 00:08:19.806 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:19.806 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:19.806 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2418928' 00:08:19.806 killing process with pid 2418928 00:08:19.806 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2418928 00:08:19.806 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 2418928 00:08:20.065 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.065 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:20.065 00:08:20.065 real 0m19.478s 00:08:20.065 user 0m49.990s 00:08:20.065 sys 0m5.305s 00:08:20.065 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.065 09:57:04 nvmf_rdma.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:20.065 ************************************ 00:08:20.065 END TEST nvmf_delete_subsystem 00:08:20.065 ************************************ 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:20.065 ************************************ 00:08:20.065 START TEST nvmf_host_management 00:08:20.065 ************************************ 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:08:20.065 * Looking for test storage... 
00:08:20.065 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated several more times by earlier sourcing]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[previous PATH, duplicate toolchain entries elided]
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous PATH, duplicate toolchain entries elided]
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo [PATH as above]
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management --
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.065 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.066 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:20.066 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:20.066 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:20.066 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:20.066 09:57:05 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@297 -- # local -ga x722 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:26.684 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:26.685 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma 
]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:26.685 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:26.685 Found net devices under 0000:da:00.0: mlx_0_0 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:26.685 Found net devices under 0000:da:00.1: mlx_0_1 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@414 -- # is_hw=yes 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@420 -- # rdma_device_init 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # uname 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.685 09:57:10 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:26.685 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:26.685 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:08:26.685 altname enp218s0f0np0 00:08:26.685 altname ens818f0np0 00:08:26.685 inet 192.168.100.8/24 scope global mlx_0_0 00:08:26.685 valid_lft forever preferred_lft forever 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:26.685 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:26.685 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:08:26.685 altname enp218s0f1np1 00:08:26.685 altname ens818f1np1 00:08:26.685 inet 192.168.100.9/24 scope global mlx_0_1 00:08:26.685 valid_lft forever preferred_lft forever 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- 
# '[' '' == iso ']' 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:26.685 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # continue 2 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:26.686 09:57:10 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:26.686 192.168.100.9' 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:26.686 192.168.100.9' 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # head -n 1 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:26.686 192.168.100.9' 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # tail -n +2 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # head -n 1 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2424934 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2424934 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2424934 ']' 00:08:26.686 09:57:10 
nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.686 09:57:10 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.686 [2024-07-25 09:57:10.950337] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:26.686 [2024-07-25 09:57:10.950391] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.686 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.686 [2024-07-25 09:57:11.018101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:26.686 [2024-07-25 09:57:11.092834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.686 [2024-07-25 09:57:11.092875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.686 [2024-07-25 09:57:11.092881] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.686 [2024-07-25 09:57:11.092889] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.686 [2024-07-25 09:57:11.092893] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
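The waitforlisten trace above reduces to a bounded retry loop against the target's RPC socket. A minimal sketch of that pattern, assuming rpc.py from the SPDK tree and using rpc_get_methods as a cheap probe (this is not the verbatim helper from autotest_common.sh, and the retry pacing is an assumption):

    # Poll the SPDK RPC socket until nvmf_tgt answers, mirroring the
    # max_retries=100 bound visible in the xtrace above.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 1; i <= max_retries; i++)); do
        if ./scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            break   # target is up and serving RPCs
        fi
        if ((i == max_retries)); then
            echo "nvmf_tgt never listened on $rpc_addr" >&2
            exit 1
        fi
        sleep 0.5   # assumed interval; the trace does not show the pacing
    done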
00:08:26.686 [2024-07-25 09:57:11.093011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:26.686 [2024-07-25 09:57:11.093116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:26.686 [2024-07-25 09:57:11.093204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.686 [2024-07-25 09:57:11.093204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:26.686 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.686 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:26.686 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:26.686 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.686 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.686 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:26.686 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:26.686 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.686 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.686 [2024-07-25 09:57:11.816471] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x22dbe10/0x22e0300) succeed. 00:08:26.686 [2024-07-25 09:57:11.825572] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x22dd400/0x2321990) succeed. 
00:08:26.946 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.946 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:26.946 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:26.946 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.946 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:26.946 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:26.946 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:26.946 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.946 09:57:11 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.946 Malloc0 00:08:26.946 [2024-07-25 09:57:12.001371] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2425102 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2425102 /var/tmp/bdevperf.sock 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2425102 ']' 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:26.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
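The rpcs.txt batch assembled at host_management.sh@22-@30 above is piped straight into rpc_cmd without being echoed, so its exact contents are not visible here. A hypothetical equivalent, built only from values that do appear in this run (the NQNs, the Malloc0 bdev, the 192.168.100.8:4420 listener; the transport itself was already created via rpc_cmd at @18 above) plus assumed Malloc sizing, might look like:

    # Speculative reconstruction of the subsystem setup; every rpc.py
    # method and flag below is a documented one, but treat the sequence
    # and the 64 MiB / 512 B Malloc parameters as illustrative.
    rpc=./scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma \
        -a 192.168.100.8 -s 4420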
00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:26.946 { 00:08:26.946 "params": { 00:08:26.946 "name": "Nvme$subsystem", 00:08:26.946 "trtype": "$TEST_TRANSPORT", 00:08:26.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:26.946 "adrfam": "ipv4", 00:08:26.946 "trsvcid": "$NVMF_PORT", 00:08:26.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:26.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:26.946 "hdgst": ${hdgst:-false}, 00:08:26.946 "ddgst": ${ddgst:-false} 00:08:26.946 }, 00:08:26.946 "method": "bdev_nvme_attach_controller" 00:08:26.946 } 00:08:26.946 EOF 00:08:26.946 )") 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:26.946 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:26.946 "params": { 00:08:26.946 "name": "Nvme0", 00:08:26.946 "trtype": "rdma", 00:08:26.946 "traddr": "192.168.100.8", 00:08:26.946 "adrfam": "ipv4", 00:08:26.946 "trsvcid": "4420", 00:08:26.946 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:26.946 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:26.946 "hdgst": false, 00:08:26.946 "ddgst": false 00:08:26.946 }, 00:08:26.946 "method": "bdev_nvme_attach_controller" 00:08:26.946 }' 00:08:26.946 [2024-07-25 09:57:12.092442] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:26.946 [2024-07-25 09:57:12.092490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425102 ] 00:08:27.205 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.206 [2024-07-25 09:57:12.161460] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.206 [2024-07-25 09:57:12.234796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.464 Running I/O for 10 seconds... 
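Everything between config=() and the final printf above is gen_nvmf_target_json assembling a bdev_nvme attach config in memory; bdevperf then consumes it through --json /dev/fd/63, i.e. bash process substitution. The same launch with an explicit file instead of an anonymous fd, using only flags shown in the trace (the temp path is arbitrary):

    # Equivalent to the traced bdevperf invocation, minus the <(...) fd.
    gen_nvmf_target_json 0 > /tmp/bdevperf.json   # helper from test/nvmf/common.sh
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 10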
00:08:28.029 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:28.029 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:28.029 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1580 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1580 -ge 100 ']' 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.030 09:57:12 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 
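The @45-@64 trace above is host_management.sh's waitforio: bdevperf gets up to ten polls for Nvme0n1 to accumulate at least 100 completed reads (this run saw 1580 on the first poll, hence the immediate break and return 0). The same loop reconstructed from the xtrace; only the pacing between polls is an assumption:

    # Poll bdevperf's RPC socket until the bdev shows >= 100 reads,
    # at most 10 attempts; ret=0 means I/O is flowing.
    ret=1
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock \
            bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 1   # assumed delay; the xtrace does not show one
    done
    [ "$ret" -eq 0 ] || exit 1   # I/O never started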
00:08:28.030 09:57:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.030 09:57:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:28.030 09:57:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.030 09:57:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:28.030 09:57:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.030 09:57:13 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:28.966 [2024-07-25 09:57:14.010971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:90112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:90240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:90368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001389a700 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:90880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 
09:57:14.011118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:91008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001385a500 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001382a380 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001380a280 len:0x10000 key:0x182500 00:08:28.966 [2024-07-25 09:57:14.011219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:92032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192c1e00 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:92160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:92288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:92416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:92544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:92672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:92800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019261b00 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:92928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019251a80 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011403] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019221900 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182800 00:08:28.966 [2024-07-25 09:57:14.011442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:93696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:93952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:94080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190bfe00 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:94208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:94336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 
nsid:1 lba:94464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:94592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:94720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001906fb80 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:95104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:95232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182700 00:08:28.966 [2024-07-25 09:57:14.011633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000daee000 len:0x10000 key:0x182400 00:08:28.966 [2024-07-25 09:57:14.011648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db0f000 len:0x10000 key:0x182400 00:08:28.966 [2024-07-25 09:57:14.011662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.966 [2024-07-25 09:57:14.011670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:87424 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x182400 00:08:28.966 [2024-07-25 09:57:14.011676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c85e000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c87f000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df2f000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df0e000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000deed000 
len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000decc000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000deab000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de8a000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de69000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:89216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000de06000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:89344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dde5000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ddc4000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:89600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dda3000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd82000 len:0x10000 key:0x182400 00:08:28.967 
[2024-07-25 09:57:14.011939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd61000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.011962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:89984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dd40000 len:0x10000 key:0x182400 00:08:28.967 [2024-07-25 09:57:14.011968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:db34d000 sqhd:52b0 p:0 m:0 dnr:0 00:08:28.967 [2024-07-25 09:57:14.013858] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019201580 was disconnected and freed. reset controller. 00:08:28.967 [2024-07-25 09:57:14.014764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:28.967 task offset: 90112 on job bdev=Nvme0n1 fails 00:08:28.967 00:08:28.967 Latency(us) 00:08:28.967 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.967 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:28.967 Job: Nvme0n1 ended in about 1.60 seconds with error 00:08:28.967 Verification LBA range: start 0x0 length 0x400 00:08:28.967 Nvme0n1 : 1.60 1066.93 66.68 40.05 0.00 57288.78 2231.34 1030600.41 00:08:28.967 =================================================================================================================== 00:08:28.967 Total : 1066.93 66.68 40.05 0.00 57288.78 2231.34 1030600.41 00:08:28.967 [2024-07-25 09:57:14.016337] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2425102 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:28.967 { 00:08:28.967 "params": { 00:08:28.967 "name": "Nvme$subsystem", 00:08:28.967 "trtype": "$TEST_TRANSPORT", 00:08:28.967 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:28.967 "adrfam": "ipv4", 00:08:28.967 "trsvcid": "$NVMF_PORT", 00:08:28.967 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:28.967 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:28.967 "hdgst": ${hdgst:-false}, 00:08:28.967 "ddgst": ${ddgst:-false} 00:08:28.967 }, 00:08:28.967 "method": "bdev_nvme_attach_controller" 00:08:28.967 } 00:08:28.967 EOF 00:08:28.967 )") 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:28.967 09:57:14 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:28.967 "params": { 00:08:28.967 "name": "Nvme0", 00:08:28.967 "trtype": "rdma", 00:08:28.967 "traddr": "192.168.100.8", 00:08:28.967 "adrfam": "ipv4", 00:08:28.967 "trsvcid": "4420", 00:08:28.967 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:28.967 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:28.967 "hdgst": false, 00:08:28.967 "ddgst": false 00:08:28.967 }, 00:08:28.967 "method": "bdev_nvme_attach_controller" 00:08:28.967 }' 00:08:28.967 [2024-07-25 09:57:14.062569] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:28.967 [2024-07-25 09:57:14.062611] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2425459 ] 00:08:28.967 EAL: No free 2048 kB hugepages reported on node 1 00:08:29.226 [2024-07-25 09:57:14.130529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.226 [2024-07-25 09:57:14.203120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.226 Running I/O for 1 seconds... 
00:08:30.603 00:08:30.603 Latency(us) 00:08:30.603 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.603 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:30.603 Verification LBA range: start 0x0 length 0x400 00:08:30.603 Nvme0n1 : 1.01 3013.61 188.35 0.00 0.00 20804.29 651.46 42941.68 00:08:30.603 =================================================================================================================== 00:08:30.603 Total : 3013.61 188.35 0.00 0.00 20804.29 651.46 42941.68 00:08:30.603 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 2425102 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:30.603 rmmod nvme_rdma 00:08:30.603 rmmod nvme_fabrics 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2424934 ']' 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2424934 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2424934 ']' 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2424934 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2424934 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2424934' 00:08:30.603 killing process with pid 2424934 00:08:30.603 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2424934 00:08:30.604 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2424934 00:08:30.863 [2024-07-25 09:57:15.946894] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:30.863 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:30.863 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:30.863 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:30.863 00:08:30.863 real 0m10.923s 00:08:30.863 user 0m24.502s 00:08:30.863 sys 0m5.173s 00:08:30.863 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.863 09:57:15 nvmf_rdma.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:30.863 ************************************ 00:08:30.863 END TEST nvmf_host_management 00:08:30.863 ************************************ 00:08:30.863 09:57:16 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:30.863 09:57:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:30.863 09:57:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.863 09:57:16 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:31.123 ************************************ 00:08:31.123 START TEST nvmf_lvol 00:08:31.123 ************************************ 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:08:31.123 * Looking for test storage... 
00:08:31.123 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # 
rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:31.123 09:57:16 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
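The e810/x722/mlx ID tables being filled here and just below drive the "Found 0000:da:00.0 (0x15b3 - 0x1015)" lines later in the trace. A minimal sketch of that style of lookup, assuming only the sysfs layout the trace itself relies on (illustrative, not the nvmf/common.sh source):

    # Sketch: list Mellanox (vendor 0x15b3) PCI functions and their netdevs.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")          # e.g. 0x15b3
        device=$(<"$pci/device")          # e.g. 0x1015
        [[ $vendor == 0x15b3 ]] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null         # kernel netdev name(s), e.g. mlx_0_0
    done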
00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:37.692 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:37.692 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:37.692 Found net devices under 0000:da:00.0: mlx_0_0 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:37.692 Found net devices under 0000:da:00.1: mlx_0_1 00:08:37.692 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@420 -- # rdma_device_init 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # uname 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:37.693 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:37.693 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:08:37.693 altname 
enp218s0f0np0 00:08:37.693 altname ens818f0np0 00:08:37.693 inet 192.168.100.8/24 scope global mlx_0_0 00:08:37.693 valid_lft forever preferred_lft forever 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:37.693 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:37.693 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:08:37.693 altname enp218s0f1np1 00:08:37.693 altname ens818f1np1 00:08:37.693 inet 192.168.100.9/24 scope global mlx_0_1 00:08:37.693 valid_lft forever preferred_lft forever 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # continue 2 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:37.693 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:37.694 192.168.100.9' 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:37.694 192.168.100.9' 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # head -n 1 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:37.694 192.168.100.9' 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # tail -n +2 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # head -n 1 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:37.694 09:57:21 
nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2428890 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2428890 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2428890 ']' 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:37.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:37.694 09:57:21 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:37.694 [2024-07-25 09:57:21.926791] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:37.694 [2024-07-25 09:57:21.926841] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:37.694 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.694 [2024-07-25 09:57:21.993403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:37.694 [2024-07-25 09:57:22.073198] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:37.694 [2024-07-25 09:57:22.073236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:37.694 [2024-07-25 09:57:22.073242] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:37.694 [2024-07-25 09:57:22.073248] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:37.694 [2024-07-25 09:57:22.073254] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
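The nvmfappstart/waitforlisten pair traced above amounts to launching nvmf_tgt and blocking until its RPC socket answers. A rough standalone equivalent, assuming the default /var/tmp/spdk.sock socket (a sketch, not the autotest helper itself):

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
    nvmfpid=$!
    # Poll the RPC socket; rpc_get_methods succeeds once the app is up.
    until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "nvmf_tgt ($nvmfpid) is listening on /var/tmp/spdk.sock"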
00:08:37.694 [2024-07-25 09:57:22.073302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.694 [2024-07-25 09:57:22.073338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:37.694 [2024-07-25 09:57:22.073336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.694 09:57:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:37.694 09:57:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:37.694 09:57:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.694 09:57:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:37.694 09:57:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:37.694 09:57:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.694 09:57:22 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:37.953 [2024-07-25 09:57:22.944928] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x23bdf00/0x23c23f0) succeed. 00:08:37.953 [2024-07-25 09:57:22.953790] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x23bf4a0/0x2403a80) succeed. 00:08:37.953 09:57:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:38.259 09:57:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:38.259 09:57:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:38.518 09:57:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:38.518 09:57:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:38.518 09:57:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:38.776 09:57:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=639f34ce-2e33-40c0-b681-a2550fd74d52 00:08:38.776 09:57:23 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 639f34ce-2e33-40c0-b681-a2550fd74d52 lvol 20 00:08:39.035 09:57:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=82c9838b-e156-498a-829d-5c859a1f7088 00:08:39.035 09:57:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:39.293 09:57:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 82c9838b-e156-498a-829d-5c859a1f7088 00:08:39.293 09:57:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:08:39.552 [2024-07-25 09:57:24.546083] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:39.552 09:57:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:08:39.811 09:57:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:39.811 09:57:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2429464 00:08:39.811 09:57:24 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:39.811 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.746 09:57:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 82c9838b-e156-498a-829d-5c859a1f7088 MY_SNAPSHOT 00:08:41.005 09:57:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=84df175e-3115-487c-bd11-e9498267d519 00:08:41.005 09:57:25 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 82c9838b-e156-498a-829d-5c859a1f7088 30 00:08:41.005 09:57:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 84df175e-3115-487c-bd11-e9498267d519 MY_CLONE 00:08:41.264 09:57:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=300e0706-9655-467f-b096-5b1eb044dd96 00:08:41.264 09:57:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 300e0706-9655-467f-b096-5b1eb044dd96 00:08:41.523 09:57:26 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2429464 00:08:51.518 Initializing NVMe Controllers 00:08:51.518 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:08:51.518 Controller IO queue size 128, less than required. 00:08:51.518 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:51.518 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:51.518 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:51.518 Initialization complete. Launching workers. 
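The latency table that follows was produced by the spdk_nvme_perf invocation at nvmf_lvol.sh@41 above; re-wrapped here for readability (same arguments, copied from the trace):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
    # -o 4096: 4 KiB I/Os; -q 128: queue depth, which matches the controller
    # queue size and is likely why the "Consider using lower queue depth"
    # notice below appears; -w randwrite for -t 10 seconds; -c 0x18 pins
    # cores 3 and 4, hence the "from core 3"/"from core 4" result rows.
    # (-s 512 should be perf's hugepage memory size in MB; treat that
    # reading as an assumption.)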
00:08:51.518 ======================================================== 00:08:51.518 Latency(us) 00:08:51.518 Device Information : IOPS MiB/s Average min max 00:08:51.518 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16538.30 64.60 7741.82 2066.76 38927.39 00:08:51.518 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16682.90 65.17 7674.13 3350.12 47229.90 00:08:51.518 ======================================================== 00:08:51.518 Total : 33221.20 129.77 7707.83 2066.76 47229.90 00:08:51.518 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 82c9838b-e156-498a-829d-5c859a1f7088 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 639f34ce-2e33-40c0-b681-a2550fd74d52 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:08:51.518 rmmod nvme_rdma 00:08:51.518 rmmod nvme_fabrics 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2428890 ']' 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2428890 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2428890 ']' 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2428890 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2428890 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2428890' 00:08:51.518 killing process with pid 2428890 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2428890 00:08:51.518 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2428890 00:08:52.087 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:52.087 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:08:52.087 00:08:52.087 real 0m20.916s 00:08:52.087 user 1m10.789s 00:08:52.087 sys 0m5.359s 00:08:52.087 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:52.087 09:57:36 nvmf_rdma.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:52.087 ************************************ 00:08:52.087 END TEST nvmf_lvol 00:08:52.087 ************************************ 00:08:52.087 09:57:36 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:52.087 09:57:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:52.087 09:57:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:52.087 09:57:36 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:52.087 ************************************ 00:08:52.087 START TEST nvmf_lvs_grow 00:08:52.087 ************************************ 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:08:52.087 * Looking for test storage... 
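For reference before the next test's setup continues: the nvmf_lvol run that just ended (END TEST above) boils down to this RPC sequence, condensed from the trace (UUIDs differ per run; the sizes are the script's MALLOC_BDEV_SIZE and LVOL_BDEV_INIT/FINAL_SIZE values):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512                        # Malloc0
    $rpc bdev_malloc_create 64 512                        # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # size-20 lvol
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                      # grow past the init size
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                       # decouple clone from snapshot
    # Teardown, as traced: nvmf_delete_subsystem, bdev_lvol_delete "$lvol",
    # bdev_lvol_delete_lvstore -u "$lvs".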
00:08:52.087 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.087 09:57:37 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 
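nvmftestinit for this test repeats the NIC discovery and IP lookup seen in the lvol run; the get_ip_address calls it traces below reduce to this pipeline (reconstructed from the commands visible in the trace, not copied from nvmf/common.sh):

    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
    get_ip_address mlx_0_1   # -> 192.168.100.9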
00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:52.087 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:52.088 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:52.088 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:52.088 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:52.088 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:52.088 09:57:37 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:58.659 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:58.660 09:57:42 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:08:58.660 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:08:58.660 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:08:58.660 Found net devices under 0000:da:00.0: mlx_0_0 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:08:58.660 Found net devices under 0000:da:00.1: mlx_0_1 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@420 -- # rdma_device_init 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # uname 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@62 -- # modprobe ib_cm 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@63 -- # modprobe ib_core 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@64 -- # modprobe ib_umad 00:08:58.660 09:57:42 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@66 -- # modprobe iw_cm 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # allocate_nic_ips 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # get_rdma_if_list 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.660 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:08:58.661 
09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:08:58.661 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:58.661 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:08:58.661 altname enp218s0f0np0 00:08:58.661 altname ens818f0np0 00:08:58.661 inet 192.168.100.8/24 scope global mlx_0_0 00:08:58.661 valid_lft forever preferred_lft forever 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:08:58.661 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:58.661 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:08:58.661 altname enp218s0f1np1 00:08:58.661 altname ens818f1np1 00:08:58.661 inet 192.168.100.9/24 scope global mlx_0_1 00:08:58.661 valid_lft forever preferred_lft forever 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # get_rdma_if_list 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:58.661 09:57:42 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_0 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # echo mlx_0_1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # continue 2 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # awk '{print $4}' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@113 -- # cut -d/ -f1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:08:58.661 192.168.100.9' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:08:58.661 192.168.100.9' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # head -n 1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:08:58.661 192.168.100.9' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # tail -n +2 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # head -n 1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@463 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2434620 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2434620 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2434620 ']' 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.661 09:57:42 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:58.661 [2024-07-25 09:57:42.921050] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:58.661 [2024-07-25 09:57:42.921101] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:58.661 EAL: No free 2048 kB hugepages reported on node 1 00:08:58.661 [2024-07-25 09:57:42.988471] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:58.661 [2024-07-25 09:57:43.066539] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:58.661 [2024-07-25 09:57:43.066573] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:58.662 [2024-07-25 09:57:43.066580] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:58.662 [2024-07-25 09:57:43.066586] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:58.662 [2024-07-25 09:57:43.066591] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
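Condensed, the bring-up the trace just completed is: start nvmf_tgt pinned to one core, wait for its RPC socket, then create the RDMA transport with the options derived above. A rough sketch of that sequence under assumed relative paths (the harness does this via nvmfappstart and waitforlisten; the rpc_get_methods polling loop is our stand-in for waitforlisten, not the harness's code):

    #!/usr/bin/env bash
    # Start the NVMe-oF target on core 0 (-m 0x1) with all tracepoint groups enabled.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!   # the harness records this as nvmfpid for later killprocess
    # Block until the app answers on its default UNIX-domain RPC socket.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done
    # Same transport parameters as NVMF_TRANSPORT_OPTS above (-u is the I/O unit size).
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192

The create_ib_device notices that follow are the new transport claiming both mlx5 ports.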
00:08:58.662 [2024-07-25 09:57:43.066608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.662 09:57:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:58.662 09:57:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:58.662 09:57:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.662 09:57:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:58.662 09:57:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.662 09:57:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.662 09:57:43 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:58.922 [2024-07-25 09:57:43.926999] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e80910/0x1e84e00) succeed. 00:08:58.922 [2024-07-25 09:57:43.936773] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e81e10/0x1ec6490) succeed. 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:58.922 ************************************ 00:08:58.922 START TEST lvs_grow_clean 00:08:58.922 ************************************ 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.922 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:59.181 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:59.181 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:59.440 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b0150f21-4670-4c97-b1be-48501beb55c5 00:08:59.440 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0150f21-4670-4c97-b1be-48501beb55c5 00:08:59.440 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:59.440 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:59.440 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:59.440 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b0150f21-4670-4c97-b1be-48501beb55c5 lvol 150 00:08:59.699 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=36696fde-bb6c-4a3a-8d98-5c7d6a39a40d 00:08:59.699 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:59.699 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:59.957 [2024-07-25 09:57:44.939964] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:59.957 [2024-07-25 09:57:44.940010] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:59.957 true 00:08:59.957 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0150f21-4670-4c97-b1be-48501beb55c5 00:08:59.957 09:57:44 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:00.215 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:00.215 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:00.215 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 36696fde-bb6c-4a3a-8d98-5c7d6a39a40d 00:09:00.473 09:57:45 
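The cluster counts asserted in the trace follow directly from the sizes involved: a 200 MiB AIO file carved into 4 MiB clusters (--cluster-sz 4194304), a 150 MiB lvol that allocates whole clusters, and an eventual grow to 400 MiB that doubles the cluster budget. A back-of-the-envelope check, with the 49-vs-50 gap going to the store's own metadata:

    # 4 MiB clusters over a 200 MiB file -> 50 clusters, 49 of them data
    echo $(( 200 / 4 ))            # 50
    # a 150 MiB lvol rounds up to whole clusters ("num_allocated_clusters": 38 below)
    echo $(( (150 + 4 - 1) / 4 ))  # 38
    # after truncate -s 400M and bdev_lvol_grow_lvstore: 99 data clusters, 61 free
    echo $(( 99 - 38 ))            # 61

Note the rescan alone (old block count 51200, new 102400; exactly 200 MiB to 400 MiB at a 4096-byte block size) does not grow the store: total_data_clusters stays 49 until bdev_lvol_grow_lvstore runs during the I/O phase below.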
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:00.473 [2024-07-25 09:57:45.586132] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:00.473 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:00.732 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2435126 00:09:00.732 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:00.732 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:00.732 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2435126 /var/tmp/bdevperf.sock 00:09:00.732 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2435126 ']' 00:09:00.732 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:00.732 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.732 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:00.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:00.732 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.732 09:57:45 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:00.732 [2024-07-25 09:57:45.809440] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
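bdevperf was started with -z, so it idles until it is handed a bdev and told to run; the harness then attaches the exported namespace over RDMA and kicks off the workload through bdevperf.py. Sketched with the same flags as the trace (relative paths assumed):

    # Attach the target's cnode0 namespace to bdevperf as bdev "Nvme0" (becomes Nvme0n1).
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0
    # Run the workload declared on the bdevperf command line: 4 KiB randwrite, QD 128, 10 s.
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The per-second Nvme0n1 lines that follow are that 10-second run; the mid-run bdev_lvol_grow_lvstore call is the point of the test.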
00:09:00.732 [2024-07-25 09:57:45.809486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2435126 ] 00:09:00.732 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.732 [2024-07-25 09:57:45.875324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.991 [2024-07-25 09:57:45.947846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:01.558 09:57:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.558 09:57:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:01.558 09:57:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:01.816 Nvme0n1 00:09:01.816 09:57:46 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:02.074 [ 00:09:02.074 { 00:09:02.074 "name": "Nvme0n1", 00:09:02.074 "aliases": [ 00:09:02.074 "36696fde-bb6c-4a3a-8d98-5c7d6a39a40d" 00:09:02.074 ], 00:09:02.074 "product_name": "NVMe disk", 00:09:02.074 "block_size": 4096, 00:09:02.074 "num_blocks": 38912, 00:09:02.074 "uuid": "36696fde-bb6c-4a3a-8d98-5c7d6a39a40d", 00:09:02.074 "assigned_rate_limits": { 00:09:02.074 "rw_ios_per_sec": 0, 00:09:02.074 "rw_mbytes_per_sec": 0, 00:09:02.074 "r_mbytes_per_sec": 0, 00:09:02.074 "w_mbytes_per_sec": 0 00:09:02.074 }, 00:09:02.074 "claimed": false, 00:09:02.074 "zoned": false, 00:09:02.075 "supported_io_types": { 00:09:02.075 "read": true, 00:09:02.075 "write": true, 00:09:02.075 "unmap": true, 00:09:02.075 "flush": true, 00:09:02.075 "reset": true, 00:09:02.075 "nvme_admin": true, 00:09:02.075 "nvme_io": true, 00:09:02.075 "nvme_io_md": false, 00:09:02.075 "write_zeroes": true, 00:09:02.075 "zcopy": false, 00:09:02.075 "get_zone_info": false, 00:09:02.075 "zone_management": false, 00:09:02.075 "zone_append": false, 00:09:02.075 "compare": true, 00:09:02.075 "compare_and_write": true, 00:09:02.075 "abort": true, 00:09:02.075 "seek_hole": false, 00:09:02.075 "seek_data": false, 00:09:02.075 "copy": true, 00:09:02.075 "nvme_iov_md": false 00:09:02.075 }, 00:09:02.075 "memory_domains": [ 00:09:02.075 { 00:09:02.075 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:02.075 "dma_device_type": 0 00:09:02.075 } 00:09:02.075 ], 00:09:02.075 "driver_specific": { 00:09:02.075 "nvme": [ 00:09:02.075 { 00:09:02.075 "trid": { 00:09:02.075 "trtype": "RDMA", 00:09:02.075 "adrfam": "IPv4", 00:09:02.075 "traddr": "192.168.100.8", 00:09:02.075 "trsvcid": "4420", 00:09:02.075 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:02.075 }, 00:09:02.075 "ctrlr_data": { 00:09:02.075 "cntlid": 1, 00:09:02.075 "vendor_id": "0x8086", 00:09:02.075 "model_number": "SPDK bdev Controller", 00:09:02.075 "serial_number": "SPDK0", 00:09:02.075 "firmware_revision": "24.09", 00:09:02.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:02.075 "oacs": { 00:09:02.075 "security": 0, 00:09:02.075 "format": 0, 00:09:02.075 "firmware": 0, 00:09:02.075 "ns_manage": 0 00:09:02.075 }, 
00:09:02.075 "multi_ctrlr": true, 00:09:02.075 "ana_reporting": false 00:09:02.075 }, 00:09:02.075 "vs": { 00:09:02.075 "nvme_version": "1.3" 00:09:02.075 }, 00:09:02.075 "ns_data": { 00:09:02.075 "id": 1, 00:09:02.075 "can_share": true 00:09:02.075 } 00:09:02.075 } 00:09:02.075 ], 00:09:02.075 "mp_policy": "active_passive" 00:09:02.075 } 00:09:02.075 } 00:09:02.075 ] 00:09:02.075 09:57:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2435359 00:09:02.075 09:57:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:02.075 09:57:47 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:02.075 Running I/O for 10 seconds... 00:09:03.011 Latency(us) 00:09:03.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:03.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.011 Nvme0n1 : 1.00 34561.00 135.00 0.00 0.00 0.00 0.00 0.00 00:09:03.011 =================================================================================================================== 00:09:03.011 Total : 34561.00 135.00 0.00 0.00 0.00 0.00 0.00 00:09:03.011 00:09:03.945 09:57:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b0150f21-4670-4c97-b1be-48501beb55c5 00:09:04.203 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.203 Nvme0n1 : 2.00 34881.00 136.25 0.00 0.00 0.00 0.00 0.00 00:09:04.203 =================================================================================================================== 00:09:04.203 Total : 34881.00 136.25 0.00 0.00 0.00 0.00 0.00 00:09:04.203 00:09:04.203 true 00:09:04.203 09:57:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0150f21-4670-4c97-b1be-48501beb55c5 00:09:04.203 09:57:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:04.462 09:57:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:04.462 09:57:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:04.462 09:57:49 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2435359 00:09:05.029 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.029 Nvme0n1 : 3.00 34944.00 136.50 0.00 0.00 0.00 0.00 0.00 00:09:05.029 =================================================================================================================== 00:09:05.029 Total : 34944.00 136.50 0.00 0.00 0.00 0.00 0.00 00:09:05.029 00:09:06.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:06.401 Nvme0n1 : 4.00 35032.50 136.85 0.00 0.00 0.00 0.00 0.00 00:09:06.401 =================================================================================================================== 00:09:06.401 Total : 35032.50 136.85 0.00 0.00 0.00 0.00 0.00 00:09:06.401 00:09:07.334 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:09:07.334 Nvme0n1 : 5.00 35123.20 137.20 0.00 0.00 0.00 0.00 0.00 00:09:07.334 =================================================================================================================== 00:09:07.334 Total : 35123.20 137.20 0.00 0.00 0.00 0.00 0.00 00:09:07.334 00:09:08.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:08.269 Nvme0n1 : 6.00 35194.33 137.48 0.00 0.00 0.00 0.00 0.00 00:09:08.269 =================================================================================================================== 00:09:08.269 Total : 35194.33 137.48 0.00 0.00 0.00 0.00 0.00 00:09:08.269 00:09:09.210 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.210 Nvme0n1 : 7.00 35254.57 137.71 0.00 0.00 0.00 0.00 0.00 00:09:09.210 =================================================================================================================== 00:09:09.210 Total : 35254.57 137.71 0.00 0.00 0.00 0.00 0.00 00:09:09.210 00:09:10.158 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.158 Nvme0n1 : 8.00 35291.88 137.86 0.00 0.00 0.00 0.00 0.00 00:09:10.158 =================================================================================================================== 00:09:10.158 Total : 35291.88 137.86 0.00 0.00 0.00 0.00 0.00 00:09:10.158 00:09:11.121 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.121 Nvme0n1 : 9.00 35317.11 137.96 0.00 0.00 0.00 0.00 0.00 00:09:11.121 =================================================================================================================== 00:09:11.121 Total : 35317.11 137.96 0.00 0.00 0.00 0.00 0.00 00:09:11.121 00:09:12.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.058 Nvme0n1 : 10.00 35340.80 138.05 0.00 0.00 0.00 0.00 0.00 00:09:12.058 =================================================================================================================== 00:09:12.058 Total : 35340.80 138.05 0.00 0.00 0.00 0.00 0.00 00:09:12.058 00:09:12.058 00:09:12.058 Latency(us) 00:09:12.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.058 Nvme0n1 : 10.00 35341.85 138.05 0.00 0.00 3618.81 2371.78 13981.01 00:09:12.058 =================================================================================================================== 00:09:12.058 Total : 35341.85 138.05 0.00 0.00 3618.81 2371.78 13981.01 00:09:12.058 0 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2435126 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 2435126 ']' 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2435126 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2435126 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 
-- # process_name=reactor_1 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2435126' 00:09:12.058 killing process with pid 2435126 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2435126 00:09:12.058 Received shutdown signal, test time was about 10.000000 seconds 00:09:12.058 00:09:12.058 Latency(us) 00:09:12.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.058 =================================================================================================================== 00:09:12.058 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:12.058 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2435126 00:09:12.317 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:12.629 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:12.888 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0150f21-4670-4c97-b1be-48501beb55c5 00:09:12.888 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:12.888 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:12.888 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:12.888 09:57:57 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:13.147 [2024-07-25 09:57:58.147031] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0150f21-4670-4c97-b1be-48501beb55c5 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0150f21-4670-4c97-b1be-48501beb55c5 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:13.147 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0150f21-4670-4c97-b1be-48501beb55c5 00:09:13.406 request: 00:09:13.406 { 00:09:13.406 "uuid": "b0150f21-4670-4c97-b1be-48501beb55c5", 00:09:13.406 "method": "bdev_lvol_get_lvstores", 00:09:13.406 "req_id": 1 00:09:13.406 } 00:09:13.406 Got JSON-RPC error response 00:09:13.406 response: 00:09:13.406 { 00:09:13.406 "code": -19, 00:09:13.406 "message": "No such device" 00:09:13.406 } 00:09:13.406 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:13.406 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:13.406 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:13.406 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:13.407 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:13.407 aio_bdev 00:09:13.407 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 36696fde-bb6c-4a3a-8d98-5c7d6a39a40d 00:09:13.407 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=36696fde-bb6c-4a3a-8d98-5c7d6a39a40d 00:09:13.407 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:13.407 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:13.407 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:13.407 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:13.407 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:13.666 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 
36696fde-bb6c-4a3a-8d98-5c7d6a39a40d -t 2000 00:09:13.666 [ 00:09:13.666 { 00:09:13.666 "name": "36696fde-bb6c-4a3a-8d98-5c7d6a39a40d", 00:09:13.666 "aliases": [ 00:09:13.666 "lvs/lvol" 00:09:13.666 ], 00:09:13.666 "product_name": "Logical Volume", 00:09:13.666 "block_size": 4096, 00:09:13.666 "num_blocks": 38912, 00:09:13.666 "uuid": "36696fde-bb6c-4a3a-8d98-5c7d6a39a40d", 00:09:13.666 "assigned_rate_limits": { 00:09:13.666 "rw_ios_per_sec": 0, 00:09:13.666 "rw_mbytes_per_sec": 0, 00:09:13.666 "r_mbytes_per_sec": 0, 00:09:13.666 "w_mbytes_per_sec": 0 00:09:13.666 }, 00:09:13.666 "claimed": false, 00:09:13.666 "zoned": false, 00:09:13.666 "supported_io_types": { 00:09:13.666 "read": true, 00:09:13.666 "write": true, 00:09:13.666 "unmap": true, 00:09:13.666 "flush": false, 00:09:13.666 "reset": true, 00:09:13.666 "nvme_admin": false, 00:09:13.666 "nvme_io": false, 00:09:13.666 "nvme_io_md": false, 00:09:13.666 "write_zeroes": true, 00:09:13.666 "zcopy": false, 00:09:13.666 "get_zone_info": false, 00:09:13.666 "zone_management": false, 00:09:13.666 "zone_append": false, 00:09:13.666 "compare": false, 00:09:13.666 "compare_and_write": false, 00:09:13.666 "abort": false, 00:09:13.666 "seek_hole": true, 00:09:13.666 "seek_data": true, 00:09:13.666 "copy": false, 00:09:13.666 "nvme_iov_md": false 00:09:13.666 }, 00:09:13.666 "driver_specific": { 00:09:13.666 "lvol": { 00:09:13.666 "lvol_store_uuid": "b0150f21-4670-4c97-b1be-48501beb55c5", 00:09:13.666 "base_bdev": "aio_bdev", 00:09:13.666 "thin_provision": false, 00:09:13.666 "num_allocated_clusters": 38, 00:09:13.666 "snapshot": false, 00:09:13.666 "clone": false, 00:09:13.666 "esnap_clone": false 00:09:13.666 } 00:09:13.666 } 00:09:13.666 } 00:09:13.666 ] 00:09:13.924 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:13.924 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0150f21-4670-4c97-b1be-48501beb55c5 00:09:13.924 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:13.924 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:13.924 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b0150f21-4670-4c97-b1be-48501beb55c5 00:09:13.924 09:57:58 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:14.184 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:14.184 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 36696fde-bb6c-4a3a-8d98-5c7d6a39a40d 00:09:14.184 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b0150f21-4670-4c97-b1be-48501beb55c5 00:09:14.443 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.701 00:09:14.701 real 0m15.653s 00:09:14.701 user 0m15.762s 00:09:14.701 sys 0m1.008s 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:14.701 ************************************ 00:09:14.701 END TEST lvs_grow_clean 00:09:14.701 ************************************ 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:14.701 ************************************ 00:09:14.701 START TEST lvs_grow_dirty 00:09:14.701 ************************************ 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:14.701 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.960 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:14.960 09:57:59 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:15.219 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:15.219 09:58:00 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:15.219 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:15.219 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:15.219 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:15.219 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c1a10322-d8d2-4f52-b982-cd3433bc1370 lvol 150 00:09:15.478 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e7114cf4-3704-47c8-a5f0-7388d9b059c0 00:09:15.478 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:15.478 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:15.478 [2024-07-25 09:58:00.626869] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:15.478 [2024-07-25 09:58:00.626918] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:15.478 true 00:09:15.737 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:15.737 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:15.737 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:15.737 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:15.996 09:58:00 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7114cf4-3704-47c8-a5f0-7388d9b059c0 00:09:15.996 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:09:16.254 [2024-07-25 09:58:01.272944] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:16.254 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:16.513 09:58:01 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:16.513 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2437731 00:09:16.513 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:16.513 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2437731 /var/tmp/bdevperf.sock 00:09:16.513 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2437731 ']' 00:09:16.513 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:16.513 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:16.513 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:16.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:16.513 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:16.513 09:58:01 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:16.513 [2024-07-25 09:58:01.492271] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
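The dirty pass repeats the clean-pass mechanics exactly (same AIO file sizes and bdevperf flags, new lvstore c1a10322-... and lvol e7114cf4-...); the difference is confined to the @72 dirty branch exercised after the grow, beyond this excerpt. The grow-under-load step itself is the same pair of calls in both passes; condensed with this pass's lvstore UUID:

    # The AIO file was already truncated to 400M during setup; while bdevperf is
    # driving randwrite I/O, grow the lvstore to claim the new space in flight.
    ./scripts/rpc.py bdev_lvol_grow_lvstore -u c1a10322-d8d2-4f52-b982-cd3433bc1370
    # total_data_clusters is then re-read and must have gone 49 -> 99
    ./scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 \
        | jq -r '.[0].total_data_clusters'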
00:09:16.513 [2024-07-25 09:58:01.492319] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437731 ] 00:09:16.513 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.513 [2024-07-25 09:58:01.555831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.513 [2024-07-25 09:58:01.626887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.449 09:58:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.449 09:58:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:17.449 09:58:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:17.449 Nvme0n1 00:09:17.449 09:58:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:17.708 [ 00:09:17.708 { 00:09:17.708 "name": "Nvme0n1", 00:09:17.708 "aliases": [ 00:09:17.708 "e7114cf4-3704-47c8-a5f0-7388d9b059c0" 00:09:17.708 ], 00:09:17.708 "product_name": "NVMe disk", 00:09:17.708 "block_size": 4096, 00:09:17.708 "num_blocks": 38912, 00:09:17.708 "uuid": "e7114cf4-3704-47c8-a5f0-7388d9b059c0", 00:09:17.708 "assigned_rate_limits": { 00:09:17.708 "rw_ios_per_sec": 0, 00:09:17.708 "rw_mbytes_per_sec": 0, 00:09:17.708 "r_mbytes_per_sec": 0, 00:09:17.708 "w_mbytes_per_sec": 0 00:09:17.708 }, 00:09:17.708 "claimed": false, 00:09:17.708 "zoned": false, 00:09:17.708 "supported_io_types": { 00:09:17.708 "read": true, 00:09:17.708 "write": true, 00:09:17.708 "unmap": true, 00:09:17.708 "flush": true, 00:09:17.708 "reset": true, 00:09:17.708 "nvme_admin": true, 00:09:17.708 "nvme_io": true, 00:09:17.708 "nvme_io_md": false, 00:09:17.708 "write_zeroes": true, 00:09:17.708 "zcopy": false, 00:09:17.708 "get_zone_info": false, 00:09:17.708 "zone_management": false, 00:09:17.708 "zone_append": false, 00:09:17.708 "compare": true, 00:09:17.708 "compare_and_write": true, 00:09:17.708 "abort": true, 00:09:17.708 "seek_hole": false, 00:09:17.708 "seek_data": false, 00:09:17.708 "copy": true, 00:09:17.708 "nvme_iov_md": false 00:09:17.708 }, 00:09:17.708 "memory_domains": [ 00:09:17.708 { 00:09:17.708 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:09:17.708 "dma_device_type": 0 00:09:17.708 } 00:09:17.708 ], 00:09:17.708 "driver_specific": { 00:09:17.708 "nvme": [ 00:09:17.708 { 00:09:17.708 "trid": { 00:09:17.708 "trtype": "RDMA", 00:09:17.708 "adrfam": "IPv4", 00:09:17.708 "traddr": "192.168.100.8", 00:09:17.708 "trsvcid": "4420", 00:09:17.708 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:17.708 }, 00:09:17.708 "ctrlr_data": { 00:09:17.708 "cntlid": 1, 00:09:17.708 "vendor_id": "0x8086", 00:09:17.708 "model_number": "SPDK bdev Controller", 00:09:17.708 "serial_number": "SPDK0", 00:09:17.708 "firmware_revision": "24.09", 00:09:17.708 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:17.708 "oacs": { 00:09:17.708 "security": 0, 00:09:17.708 "format": 0, 00:09:17.708 "firmware": 0, 00:09:17.708 "ns_manage": 0 00:09:17.708 }, 
00:09:17.708 "multi_ctrlr": true, 00:09:17.708 "ana_reporting": false 00:09:17.708 }, 00:09:17.708 "vs": { 00:09:17.708 "nvme_version": "1.3" 00:09:17.708 }, 00:09:17.708 "ns_data": { 00:09:17.708 "id": 1, 00:09:17.708 "can_share": true 00:09:17.708 } 00:09:17.708 } 00:09:17.708 ], 00:09:17.708 "mp_policy": "active_passive" 00:09:17.708 } 00:09:17.708 } 00:09:17.708 ] 00:09:17.708 09:58:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2437963 00:09:17.708 09:58:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:17.708 09:58:02 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:17.708 Running I/O for 10 seconds... 00:09:19.085 Latency(us) 00:09:19.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.085 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.085 Nvme0n1 : 1.00 34369.00 134.25 0.00 0.00 0.00 0.00 0.00 00:09:19.085 =================================================================================================================== 00:09:19.085 Total : 34369.00 134.25 0.00 0.00 0.00 0.00 0.00 00:09:19.085 00:09:19.652 09:58:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:19.911 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.911 Nvme0n1 : 2.00 34770.50 135.82 0.00 0.00 0.00 0.00 0.00 00:09:19.911 =================================================================================================================== 00:09:19.911 Total : 34770.50 135.82 0.00 0.00 0.00 0.00 0.00 00:09:19.911 00:09:19.911 true 00:09:19.911 09:58:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:19.911 09:58:04 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:20.170 09:58:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:20.170 09:58:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:20.170 09:58:05 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2437963 00:09:20.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.736 Nvme0n1 : 3.00 34892.33 136.30 0.00 0.00 0.00 0.00 0.00 00:09:20.736 =================================================================================================================== 00:09:20.736 Total : 34892.33 136.30 0.00 0.00 0.00 0.00 0.00 00:09:20.736 00:09:21.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.672 Nvme0n1 : 4.00 35016.75 136.78 0.00 0.00 0.00 0.00 0.00 00:09:21.672 =================================================================================================================== 00:09:21.672 Total : 35016.75 136.78 0.00 0.00 0.00 0.00 0.00 00:09:21.672 00:09:23.050 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:09:23.050 Nvme0n1 : 5.00 35110.00 137.15 0.00 0.00 0.00 0.00 0.00 00:09:23.050 =================================================================================================================== 00:09:23.050 Total : 35110.00 137.15 0.00 0.00 0.00 0.00 0.00 00:09:23.050 00:09:23.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.986 Nvme0n1 : 6.00 35174.00 137.40 0.00 0.00 0.00 0.00 0.00 00:09:23.986 =================================================================================================================== 00:09:23.986 Total : 35174.00 137.40 0.00 0.00 0.00 0.00 0.00 00:09:23.986 00:09:24.924 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.924 Nvme0n1 : 7.00 35213.43 137.55 0.00 0.00 0.00 0.00 0.00 00:09:24.924 =================================================================================================================== 00:09:24.924 Total : 35213.43 137.55 0.00 0.00 0.00 0.00 0.00 00:09:24.924 00:09:25.860 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:25.860 Nvme0n1 : 8.00 35239.75 137.66 0.00 0.00 0.00 0.00 0.00 00:09:25.860 =================================================================================================================== 00:09:25.860 Total : 35239.75 137.66 0.00 0.00 0.00 0.00 0.00 00:09:25.860 00:09:26.828 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:26.828 Nvme0n1 : 9.00 35263.67 137.75 0.00 0.00 0.00 0.00 0.00 00:09:26.828 =================================================================================================================== 00:09:26.828 Total : 35263.67 137.75 0.00 0.00 0.00 0.00 0.00 00:09:26.828 00:09:27.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.766 Nvme0n1 : 10.00 35292.40 137.86 0.00 0.00 0.00 0.00 0.00 00:09:27.766 =================================================================================================================== 00:09:27.766 Total : 35292.40 137.86 0.00 0.00 0.00 0.00 0.00 00:09:27.766 00:09:27.766 00:09:27.766 Latency(us) 00:09:27.766 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.766 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.766 Nvme0n1 : 10.00 35293.28 137.86 0.00 0.00 3623.82 2605.84 15915.89 00:09:27.767 =================================================================================================================== 00:09:27.767 Total : 35293.28 137.86 0.00 0.00 3623.82 2605.84 15915.89 00:09:27.767 0 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2437731 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2437731 ']' 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2437731 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2437731 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 
-- # process_name=reactor_1 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2437731' 00:09:27.767 killing process with pid 2437731 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2437731 00:09:27.767 Received shutdown signal, test time was about 10.000000 seconds 00:09:27.767 00:09:27.767 Latency(us) 00:09:27.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.767 =================================================================================================================== 00:09:27.767 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:27.767 09:58:12 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2437731 00:09:28.027 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:28.287 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2434620 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2434620 00:09:28.546 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2434620 Killed "${NVMF_APP[@]}" "$@" 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2439814 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2439814 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2439814 ']' 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:28.546 09:58:13 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:28.805 [2024-07-25 09:58:13.711626] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:28.805 [2024-07-25 09:58:13.711676] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.805 EAL: No free 2048 kB hugepages reported on node 1 00:09:28.805 [2024-07-25 09:58:13.779727] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.805 [2024-07-25 09:58:13.856663] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.805 [2024-07-25 09:58:13.856697] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.805 [2024-07-25 09:58:13.856704] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.805 [2024-07-25 09:58:13.856710] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.805 [2024-07-25 09:58:13.856715] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
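At this point the harness has relaunched nvmf_tgt and blocks in waitforlisten until the JSON-RPC socket answers. A minimal sketch of that polling pattern, assuming the default /var/tmp/spdk.sock socket and an illustrative 100-attempt budget (mirroring the max_retries=100 local above); the real helper lives in autotest_common.sh:

    # Poll the target's JSON-RPC socket until it accepts a trivial call.
    for ((i = 0; i < 100; i++)); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
            break                     # target is up and serving RPCs
        fi
        sleep 0.5                     # not listening yet; retry
    done
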
00:09:28.805 [2024-07-25 09:58:13.856731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.372 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:29.372 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:29.372 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:29.372 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:29.372 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:29.631 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:29.631 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.631 [2024-07-25 09:58:14.701560] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:29.631 [2024-07-25 09:58:14.701640] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:29.631 [2024-07-25 09:58:14.701663] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:29.631 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:29.631 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e7114cf4-3704-47c8-a5f0-7388d9b059c0 00:09:29.631 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e7114cf4-3704-47c8-a5f0-7388d9b059c0 00:09:29.631 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:29.631 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:29.631 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:29.631 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:29.631 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:29.890 09:58:14 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7114cf4-3704-47c8-a5f0-7388d9b059c0 -t 2000 00:09:30.149 [ 00:09:30.149 { 00:09:30.149 "name": "e7114cf4-3704-47c8-a5f0-7388d9b059c0", 00:09:30.149 "aliases": [ 00:09:30.149 "lvs/lvol" 00:09:30.149 ], 00:09:30.149 "product_name": "Logical Volume", 00:09:30.149 "block_size": 4096, 00:09:30.149 "num_blocks": 38912, 00:09:30.150 "uuid": "e7114cf4-3704-47c8-a5f0-7388d9b059c0", 00:09:30.150 "assigned_rate_limits": { 00:09:30.150 "rw_ios_per_sec": 0, 00:09:30.150 "rw_mbytes_per_sec": 0, 00:09:30.150 "r_mbytes_per_sec": 0, 00:09:30.150 "w_mbytes_per_sec": 0 00:09:30.150 }, 00:09:30.150 "claimed": false, 00:09:30.150 "zoned": false, 
00:09:30.150 "supported_io_types": { 00:09:30.150 "read": true, 00:09:30.150 "write": true, 00:09:30.150 "unmap": true, 00:09:30.150 "flush": false, 00:09:30.150 "reset": true, 00:09:30.150 "nvme_admin": false, 00:09:30.150 "nvme_io": false, 00:09:30.150 "nvme_io_md": false, 00:09:30.150 "write_zeroes": true, 00:09:30.150 "zcopy": false, 00:09:30.150 "get_zone_info": false, 00:09:30.150 "zone_management": false, 00:09:30.150 "zone_append": false, 00:09:30.150 "compare": false, 00:09:30.150 "compare_and_write": false, 00:09:30.150 "abort": false, 00:09:30.150 "seek_hole": true, 00:09:30.150 "seek_data": true, 00:09:30.150 "copy": false, 00:09:30.150 "nvme_iov_md": false 00:09:30.150 }, 00:09:30.150 "driver_specific": { 00:09:30.150 "lvol": { 00:09:30.150 "lvol_store_uuid": "c1a10322-d8d2-4f52-b982-cd3433bc1370", 00:09:30.150 "base_bdev": "aio_bdev", 00:09:30.150 "thin_provision": false, 00:09:30.150 "num_allocated_clusters": 38, 00:09:30.150 "snapshot": false, 00:09:30.150 "clone": false, 00:09:30.150 "esnap_clone": false 00:09:30.150 } 00:09:30.150 } 00:09:30.150 } 00:09:30.150 ] 00:09:30.150 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:30.150 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:30.150 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:30.150 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:30.150 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:30.150 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:30.409 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:30.409 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:30.409 [2024-07-25 09:58:15.562214] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:30.668 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:30.668 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:30.668 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:30.668 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:30.668 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" 
in 00:09:30.668 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:30.669 request: 00:09:30.669 { 00:09:30.669 "uuid": "c1a10322-d8d2-4f52-b982-cd3433bc1370", 00:09:30.669 "method": "bdev_lvol_get_lvstores", 00:09:30.669 "req_id": 1 00:09:30.669 } 00:09:30.669 Got JSON-RPC error response 00:09:30.669 response: 00:09:30.669 { 00:09:30.669 "code": -19, 00:09:30.669 "message": "No such device" 00:09:30.669 } 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:30.669 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:30.928 aio_bdev 00:09:30.928 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e7114cf4-3704-47c8-a5f0-7388d9b059c0 00:09:30.928 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=e7114cf4-3704-47c8-a5f0-7388d9b059c0 00:09:30.928 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:30.928 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:30.928 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:30.928 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:30.928 09:58:15 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:31.187 09:58:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7114cf4-3704-47c8-a5f0-7388d9b059c0 -t 2000 00:09:31.187 [ 00:09:31.187 { 00:09:31.187 "name": "e7114cf4-3704-47c8-a5f0-7388d9b059c0", 00:09:31.187 "aliases": [ 00:09:31.187 "lvs/lvol" 00:09:31.187 ], 00:09:31.187 "product_name": "Logical Volume", 00:09:31.187 "block_size": 4096, 00:09:31.187 "num_blocks": 38912, 00:09:31.187 "uuid": "e7114cf4-3704-47c8-a5f0-7388d9b059c0", 00:09:31.187 "assigned_rate_limits": { 00:09:31.187 "rw_ios_per_sec": 0, 00:09:31.187 "rw_mbytes_per_sec": 0, 00:09:31.187 "r_mbytes_per_sec": 0, 00:09:31.187 "w_mbytes_per_sec": 0 00:09:31.187 }, 00:09:31.187 "claimed": false, 00:09:31.187 "zoned": false, 00:09:31.187 "supported_io_types": { 00:09:31.187 "read": true, 00:09:31.187 "write": true, 00:09:31.187 "unmap": true, 00:09:31.187 "flush": false, 00:09:31.187 "reset": true, 00:09:31.187 "nvme_admin": false, 00:09:31.187 "nvme_io": false, 00:09:31.187 "nvme_io_md": false, 00:09:31.187 "write_zeroes": true, 00:09:31.187 "zcopy": false, 00:09:31.187 "get_zone_info": false, 00:09:31.187 "zone_management": false, 00:09:31.187 "zone_append": false, 00:09:31.187 "compare": false, 00:09:31.187 "compare_and_write": false, 00:09:31.187 "abort": false, 00:09:31.187 "seek_hole": true, 00:09:31.187 "seek_data": true, 00:09:31.187 "copy": false, 00:09:31.187 "nvme_iov_md": false 00:09:31.187 }, 00:09:31.187 "driver_specific": { 00:09:31.187 "lvol": { 00:09:31.187 "lvol_store_uuid": "c1a10322-d8d2-4f52-b982-cd3433bc1370", 00:09:31.187 "base_bdev": "aio_bdev", 00:09:31.187 "thin_provision": false, 00:09:31.187 "num_allocated_clusters": 38, 00:09:31.187 "snapshot": false, 00:09:31.187 "clone": false, 00:09:31.187 "esnap_clone": false 00:09:31.187 } 00:09:31.187 } 00:09:31.187 } 00:09:31.187 ] 00:09:31.447 09:58:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:31.447 09:58:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:31.447 09:58:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:31.447 09:58:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:31.447 09:58:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:31.447 09:58:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:31.707 09:58:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:31.707 09:58:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7114cf4-3704-47c8-a5f0-7388d9b059c0 00:09:31.707 09:58:16 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1a10322-d8d2-4f52-b982-cd3433bc1370 00:09:31.966 09:58:17 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:32.226 00:09:32.226 real 0m17.483s 00:09:32.226 user 0m45.696s 00:09:32.226 sys 0m2.788s 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.226 ************************************ 00:09:32.226 END TEST lvs_grow_dirty 00:09:32.226 ************************************ 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:32.226 nvmf_trace.0 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:32.226 rmmod nvme_rdma 00:09:32.226 rmmod nvme_fabrics 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2439814 ']' 00:09:32.226 09:58:17 
nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2439814 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2439814 ']' 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2439814 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:32.226 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2439814 00:09:32.485 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:32.485 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:32.485 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2439814' 00:09:32.485 killing process with pid 2439814 00:09:32.485 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2439814 00:09:32.485 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2439814 00:09:32.485 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:32.485 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:32.485 00:09:32.485 real 0m40.560s 00:09:32.485 user 1m7.383s 00:09:32.485 sys 0m8.548s 00:09:32.486 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:32.486 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:32.486 ************************************ 00:09:32.486 END TEST nvmf_lvs_grow 00:09:32.486 ************************************ 00:09:32.486 09:58:17 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:32.486 09:58:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:32.486 09:58:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:32.486 09:58:17 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:32.745 ************************************ 00:09:32.745 START TEST nvmf_bdev_io_wait 00:09:32.745 ************************************ 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:09:32.745 * Looking for test storage... 
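The suite transitions here from nvmf_lvs_grow to nvmf_bdev_io_wait through the run_test helper, which emits the START TEST/END TEST banners and the real/user/sys timings seen above. A simplified sketch of that wrapper shape, assuming plain banner formatting (the actual helper in autotest_common.sh also manages xtrace state and per-test bookkeeping):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                      # run the test script, printing timings
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
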
00:09:32.745 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.745 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 
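The stacked /opt/golangci, /opt/protoc, and /opt/go segments in the PATH exports above accumulate because paths/export.sh prepends the same directories each time a test sources it. A dedup-guarded alternative is sketched below; pathmunge is a hypothetical helper name, not something the SPDK tree provides:

    # Prepend a directory to PATH only when it is not already present.
    pathmunge() {
        case ":$PATH:" in
            *":$1:"*) ;;               # already on PATH; skip
            *) PATH="$1:$PATH" ;;
        esac
    }
    pathmunge /opt/go/1.21.1/bin
    pathmunge /opt/protoc/21.7/bin
    export PATH
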
00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:32.746 09:58:17 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 
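The gather_supported_nvmf_pci_devs helper that begins here fills the e810/x722/mlx arrays from pci_bus_cache entries keyed by vendor:device IDs (Intel 0x8086, Mellanox 0x15b3); the 0x1015 parts it finds below are ConnectX-4 Lx ports. A rough standalone equivalent using lspci, assuming lspci is installed (the helper itself walks cached sysfs data rather than shelling out):

    # List Mellanox (vendor 0x15b3) PCI functions with numeric IDs.
    lspci -nn -d 15b3:
    # Expected on this host: two functions at 0000:da:00.0/.1 with ID [15b3:1015]
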
00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:39.317 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:39.318 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:39.318 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:39.318 Found net devices under 0000:da:00.0: mlx_0_0 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:39.318 Found net devices under 0000:da:00.1: mlx_0_1 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:09:39.318 09:58:23 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@420 -- # rdma_device_init 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # uname 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:39.318 09:58:23 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:39.318 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:39.318 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:09:39.318 altname enp218s0f0np0 00:09:39.318 altname ens818f0np0 00:09:39.318 inet 192.168.100.8/24 scope global mlx_0_0 00:09:39.318 valid_lft forever preferred_lft forever 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:39.318 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:39.318 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:39.319 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:09:39.319 altname enp218s0f1np1 00:09:39.319 altname ens818f1np1 00:09:39.319 inet 192.168.100.9/24 scope global mlx_0_1 00:09:39.319 valid_lft forever preferred_lft forever 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # continue 2 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:39.319 09:58:23 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:09:39.319 192.168.100.9' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:39.319 192.168.100.9' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # head -n 1 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:39.319 192.168.100.9' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # tail -n +2 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # head -n 1 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2443627 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2443627 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2443627 ']' 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:39.319 09:58:23 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.319 [2024-07-25 09:58:23.561915] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
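The address discovery traced above reduces to a short pipeline. A condensed sketch (not the verbatim nvmf/common.sh source; interface names and addresses are the ones from this run):

    # IPv4 address of a netdev, with the /prefix stripped (common.sh@112-113)
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    # RDMA_IP_LIST holds one address per line; head/tail split it into the
    # two target IPs exactly as traced at common.sh@457-458
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9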
00:09:39.319 [2024-07-25 09:58:23.561964] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.319 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.319 [2024-07-25 09:58:23.629601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:39.319 [2024-07-25 09:58:23.711055] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.319 [2024-07-25 09:58:23.711097] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.319 [2024-07-25 09:58:23.711104] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:39.319 [2024-07-25 09:58:23.711112] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:39.319 [2024-07-25 09:58:23.711117] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:39.319 [2024-07-25 09:58:23.711204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:39.319 [2024-07-25 09:58:23.711309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:39.319 [2024-07-25 09:58:23.711393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.319 [2024-07-25 09:58:23.711394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.319 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.579 [2024-07-25 09:58:24.504167] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x906b20/0x90b010) succeed. 00:09:39.579 [2024-07-25 09:58:24.512941] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x908160/0x94c6a0) succeed. 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.579 Malloc0 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:39.579 [2024-07-25 09:58:24.687051] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2443878 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2443880 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 
-- # local subsystem config 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:39.579 { 00:09:39.579 "params": { 00:09:39.579 "name": "Nvme$subsystem", 00:09:39.579 "trtype": "$TEST_TRANSPORT", 00:09:39.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.579 "adrfam": "ipv4", 00:09:39.579 "trsvcid": "$NVMF_PORT", 00:09:39.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.579 "hdgst": ${hdgst:-false}, 00:09:39.579 "ddgst": ${ddgst:-false} 00:09:39.579 }, 00:09:39.579 "method": "bdev_nvme_attach_controller" 00:09:39.579 } 00:09:39.579 EOF 00:09:39.579 )") 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2443882 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:39.579 { 00:09:39.579 "params": { 00:09:39.579 "name": "Nvme$subsystem", 00:09:39.579 "trtype": "$TEST_TRANSPORT", 00:09:39.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.579 "adrfam": "ipv4", 00:09:39.579 "trsvcid": "$NVMF_PORT", 00:09:39.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.579 "hdgst": ${hdgst:-false}, 00:09:39.579 "ddgst": ${ddgst:-false} 00:09:39.579 }, 00:09:39.579 "method": "bdev_nvme_attach_controller" 00:09:39.579 } 00:09:39.579 EOF 00:09:39.579 )") 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2443885 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:39.579 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # 
for subsystem in "${@:-1}" 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:39.580 { 00:09:39.580 "params": { 00:09:39.580 "name": "Nvme$subsystem", 00:09:39.580 "trtype": "$TEST_TRANSPORT", 00:09:39.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.580 "adrfam": "ipv4", 00:09:39.580 "trsvcid": "$NVMF_PORT", 00:09:39.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.580 "hdgst": ${hdgst:-false}, 00:09:39.580 "ddgst": ${ddgst:-false} 00:09:39.580 }, 00:09:39.580 "method": "bdev_nvme_attach_controller" 00:09:39.580 } 00:09:39.580 EOF 00:09:39.580 )") 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:39.580 { 00:09:39.580 "params": { 00:09:39.580 "name": "Nvme$subsystem", 00:09:39.580 "trtype": "$TEST_TRANSPORT", 00:09:39.580 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:39.580 "adrfam": "ipv4", 00:09:39.580 "trsvcid": "$NVMF_PORT", 00:09:39.580 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:39.580 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:39.580 "hdgst": ${hdgst:-false}, 00:09:39.580 "ddgst": ${ddgst:-false} 00:09:39.580 }, 00:09:39.580 "method": "bdev_nvme_attach_controller" 00:09:39.580 } 00:09:39.580 EOF 00:09:39.580 )") 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2443878 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:39.580 "params": { 00:09:39.580 "name": "Nvme1", 00:09:39.580 "trtype": "rdma", 00:09:39.580 "traddr": "192.168.100.8", 00:09:39.580 "adrfam": "ipv4", 00:09:39.580 "trsvcid": "4420", 00:09:39.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.580 "hdgst": false, 00:09:39.580 "ddgst": false 00:09:39.580 }, 00:09:39.580 "method": "bdev_nvme_attach_controller" 00:09:39.580 }' 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:39.580 "params": { 00:09:39.580 "name": "Nvme1", 00:09:39.580 "trtype": "rdma", 00:09:39.580 "traddr": "192.168.100.8", 00:09:39.580 "adrfam": "ipv4", 00:09:39.580 "trsvcid": "4420", 00:09:39.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.580 "hdgst": false, 00:09:39.580 "ddgst": false 00:09:39.580 }, 00:09:39.580 "method": "bdev_nvme_attach_controller" 00:09:39.580 }' 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:39.580 "params": { 00:09:39.580 "name": "Nvme1", 00:09:39.580 "trtype": "rdma", 00:09:39.580 "traddr": "192.168.100.8", 00:09:39.580 "adrfam": "ipv4", 00:09:39.580 "trsvcid": "4420", 00:09:39.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.580 "hdgst": false, 00:09:39.580 "ddgst": false 00:09:39.580 }, 00:09:39.580 "method": "bdev_nvme_attach_controller" 00:09:39.580 }' 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:39.580 09:58:24 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:39.580 "params": { 00:09:39.580 "name": "Nvme1", 00:09:39.580 "trtype": "rdma", 00:09:39.580 "traddr": "192.168.100.8", 00:09:39.580 "adrfam": "ipv4", 00:09:39.580 "trsvcid": "4420", 00:09:39.580 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:39.580 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:39.580 "hdgst": false, 00:09:39.580 "ddgst": false 00:09:39.580 }, 00:09:39.580 "method": "bdev_nvme_attach_controller" 00:09:39.580 }' 00:09:39.580 [2024-07-25 09:58:24.733109] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:39.580 [2024-07-25 09:58:24.733167] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:39.580 [2024-07-25 09:58:24.734431] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:39.580 [2024-07-25 09:58:24.734469] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:39.580 [2024-07-25 09:58:24.735669] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:39.580 [2024-07-25 09:58:24.735707] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:39.580 [2024-07-25 09:58:24.737579] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
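Four bdevperf initiators are initializing here, one per workload. The --json /dev/fd/63 argument in each launch is consistent with bash process substitution feeding the generated config; a plausible reconstruction of one launch follows (not the verbatim bdev_io_wait.sh; $rootdir stands in for /var/jenkins/workspace/nvmf-phy-autotest/spdk):

    # workload / core mask / shm id / pid, from the launches traced above:
    #   write 0x10 -i 1 pid 2443878    read  0x20 -i 2 pid 2443880
    #   flush 0x40 -i 3 pid 2443882    unmap 0x80 -i 4 pid 2443885
    # Distinct core masks and shm ids (-i) let all four run concurrently
    # against the same nqn.2016-06.io.spdk:cnode1 listener.
    "$rootdir/build/examples/bdevperf" -m 0x10 -i 1 \
        --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!
    wait "$WRITE_PID"    # bdev_io_wait.sh@37, traced above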
00:09:39.580 [2024-07-25 09:58:24.737615] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:39.839 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.839 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.839 [2024-07-25 09:58:24.902242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.839 EAL: No free 2048 kB hugepages reported on node 1 00:09:39.839 [2024-07-25 09:58:24.978525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:39.839 [2024-07-25 09:58:24.992773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.098 EAL: No free 2048 kB hugepages reported on node 1 00:09:40.098 [2024-07-25 09:58:25.065440] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:40.098 [2024-07-25 09:58:25.093206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.098 [2024-07-25 09:58:25.170358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:40.098 [2024-07-25 09:58:25.193735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:40.356 [2024-07-25 09:58:25.287614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:40.356 Running I/O for 1 seconds... 00:09:40.356 Running I/O for 1 seconds... 00:09:40.356 Running I/O for 1 seconds... 00:09:40.356 Running I/O for 1 seconds... 00:09:41.290 00:09:41.290 Latency(us) 00:09:41.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.290 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:41.290 Nvme1n1 : 1.00 19685.63 76.90 0.00 0.00 6484.69 4337.86 18474.91 00:09:41.290 =================================================================================================================== 00:09:41.290 Total : 19685.63 76.90 0.00 0.00 6484.69 4337.86 18474.91 00:09:41.290 00:09:41.290 Latency(us) 00:09:41.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.290 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:41.290 Nvme1n1 : 1.01 15172.65 59.27 0.00 0.00 8407.61 5804.62 17226.61 00:09:41.290 =================================================================================================================== 00:09:41.290 Total : 15172.65 59.27 0.00 0.00 8407.61 5804.62 17226.61 00:09:41.290 00:09:41.290 Latency(us) 00:09:41.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.290 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:41.290 Nvme1n1 : 1.00 14832.59 57.94 0.00 0.00 8609.25 3744.91 18724.57 00:09:41.290 =================================================================================================================== 00:09:41.290 Total : 14832.59 57.94 0.00 0.00 8609.25 3744.91 18724.57 00:09:41.290 00:09:41.290 Latency(us) 00:09:41.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:41.290 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:41.290 Nvme1n1 : 1.00 254790.77 995.28 0.00 0.00 500.33 202.85 1880.26 00:09:41.290 =================================================================================================================== 00:09:41.290 Total : 254790.77 995.28 0.00 0.00 500.33 202.85 1880.26 00:09:41.548 09:58:26 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2443880 00:09:41.548 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2443882 00:09:41.548 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2443885 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:09:41.807 rmmod nvme_rdma 00:09:41.807 rmmod nvme_fabrics 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2443627 ']' 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2443627 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2443627 ']' 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2443627 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2443627 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2443627' 00:09:41.807 killing process with pid 2443627 00:09:41.807 09:58:26 
nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 2443627 00:09:41.807 09:58:26 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2443627 00:09:42.066 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.066 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:09:42.066 00:09:42.066 real 0m9.429s 00:09:42.066 user 0m20.714s 00:09:42.066 sys 0m5.602s 00:09:42.066 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.066 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:42.066 ************************************ 00:09:42.066 END TEST nvmf_bdev_io_wait 00:09:42.066 ************************************ 00:09:42.066 09:58:27 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:42.066 09:58:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:42.066 09:58:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:42.066 09:58:27 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:42.066 ************************************ 00:09:42.066 START TEST nvmf_queue_depth 00:09:42.066 ************************************ 00:09:42.066 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:09:42.326 * Looking for test storage... 
00:09:42.326 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.326 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:42.327 
09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:42.327 09:58:27 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 
00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:09:47.631 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:47.631 
09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:09:47.631 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:09:47.631 Found net devices under 0000:da:00.0: mlx_0_0 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:09:47.631 Found net devices under 0000:da:00.1: mlx_0_1 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.631 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.632 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 
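The device scan above walks sysfs rather than parsing tool output. Condensed, the per-PCI netdev lookup traced at common.sh@382-401 is (paths and names from this run):

    for pci in "${pci_devs[@]}"; do                         # 0000:da:00.0, 0000:da:00.1
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/mlx_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, keep the name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done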
00:09:47.632 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:09:47.632 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@420 -- # rdma_device_init 00:09:47.632 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:09:47.632 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # uname 00:09:47.632 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:09:47.632 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@62 -- # modprobe ib_cm 00:09:47.632 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@63 -- # modprobe ib_core 00:09:47.632 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@64 -- # modprobe ib_umad 00:09:47.632 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@66 -- # modprobe iw_cm 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # allocate_nic_ips 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # get_rdma_if_list 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:47.891 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:47.892 09:58:32 
nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:09:47.892 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:47.892 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:09:47.892 altname enp218s0f0np0 00:09:47.892 altname ens818f0np0 00:09:47.892 inet 192.168.100.8/24 scope global mlx_0_0 00:09:47.892 valid_lft forever preferred_lft forever 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:09:47.892 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:47.892 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:09:47.892 altname enp218s0f1np1 00:09:47.892 altname ens818f1np1 00:09:47.892 inet 192.168.100.9/24 scope global mlx_0_1 00:09:47.892 valid_lft forever preferred_lft forever 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # get_rdma_if_list 
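get_rdma_if_list, traced just above and about to run again for get_available_rdma_ips, intersects the PCI-derived netdev names with the RDMA-capable devices reported by rxe_cfg. A minimal sketch of the loop; the escaped \m\l\x... patterns in the trace are just xtrace's rendering of a literal bash comparison:

    mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)        # common.sh@94
    for net_dev in "${net_devs[@]}"; do
        for rxe_net_dev in "${rxe_net_devs[@]}"; do
            if [[ "$net_dev" == "$rxe_net_dev" ]]; then
                echo "$net_dev"                         # mlx_0_0, then mlx_0_1
                continue 2                              # next net_dev once matched
            fi
        done
    done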
00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_0 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # echo mlx_0_1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # continue 2 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # awk '{print $4}' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@113 -- # cut -d/ -f1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # 
RDMA_IP_LIST='192.168.100.8 00:09:47.892 192.168.100.9' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:09:47.892 192.168.100.9' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # head -n 1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:09:47.892 192.168.100.9' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # tail -n +2 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # head -n 1 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2447426 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2447426 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2447426 ']' 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.892 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.893 09:58:32 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:47.893 [2024-07-25 09:58:33.014752] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
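[nvmfappstart, pieced together from the @479-@482 trace above while the EAL init messages continue below: launch nvmf_tgt in the background, record its pid, and poll the RPC socket until it answers. The polling body is an inferred sketch — the trace only shows the retry budget (max_retries=100) and the final return 0:]

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!
  for ((i = 100; i > 0; i--)); do
      # rpc_get_methods only succeeds once the app listens on the UNIX-domain socket
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
      sleep 0.5
  done
  (( i > 0 ))   # fail the test if the socket never came up within the retry budget
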
00:09:47.893 [2024-07-25 09:58:33.014797] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.893 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.151 [2024-07-25 09:58:33.083137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.152 [2024-07-25 09:58:33.159761] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.152 [2024-07-25 09:58:33.159794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.152 [2024-07-25 09:58:33.159801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.152 [2024-07-25 09:58:33.159806] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.152 [2024-07-25 09:58:33.159811] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.152 [2024-07-25 09:58:33.159833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.716 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.717 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:48.717 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:48.717 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:48.717 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.717 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.717 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:48.717 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.717 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.717 [2024-07-25 09:58:33.866615] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc4cc20/0xc51110) succeed. 00:09:48.717 [2024-07-25 09:58:33.875567] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc4e120/0xc927a0) succeed. 
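[The nvmf_create_transport call just traced, plus the bdev/subsystem/listener calls that follow below, provision the whole target side. rpc_cmd effectively forwards to scripts/rpc.py on the default /var/tmp/spdk.sock, so the same setup typed out by hand would look like this (arguments taken verbatim from the trace):]

  cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192   # -u = IO unit size
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                              # 64 MiB ramdisk, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
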
00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.975 Malloc0 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.975 [2024-07-25 09:58:33.979701] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2447572 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2447572 /var/tmp/bdevperf.sock 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2447572 ']' 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:48.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.975 09:58:33 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.975 [2024-07-25 09:58:34.024477] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:48.975 [2024-07-25 09:58:34.024517] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2447572 ] 00:09:48.975 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.975 [2024-07-25 09:58:34.092712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.234 [2024-07-25 09:58:34.171918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.800 09:58:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.800 09:58:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:49.800 09:58:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:49.800 09:58:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.800 09:58:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.800 NVMe0n1 00:09:49.800 09:58:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.800 09:58:34 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:50.058 Running I/O for 10 seconds... 
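[Initiator side of the run, collapsed from the queue_depth.sh@29-@35 trace above: bdevperf starts idle (-z) on its own RPC socket, the exported namespace is attached as an NVMe bdev, and perform_tests kicks off the 10-second verify workload at queue depth 1024. Relative paths assume the spdk checkout as cwd:]

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
  # once the socket answers (waitforlisten), attach the remote controller over RDMA...
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # ...and start the configured workload
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
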
00:10:00.026
00:10:00.026 Latency(us)
00:10:00.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:00.026 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:10:00.026 Verification LBA range: start 0x0 length 0x4000
00:10:00.026 NVMe0n1 : 10.05 17635.61 68.89 0.00 0.00 57920.42 22843.98 36450.50
00:10:00.026 ===================================================================================================================
00:10:00.026 Total : 17635.61 68.89 0.00 0.00 57920.42 22843.98 36450.50
00:10:00.026 0
00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2447572 00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2447572 ']' 00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2447572 00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2447572 00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2447572'
00:10:00.026 killing process with pid 2447572
00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2447572
00:10:00.026 Received shutdown signal, test time was about 10.000000 seconds
00:10:00.026
00:10:00.026 Latency(us)
00:10:00.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:00.026 ===================================================================================================================
00:10:00.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:10:00.026 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2447572 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:10:00.285 rmmod nvme_rdma
00:10:00.285 rmmod nvme_fabrics
00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
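[nvmfcleanup's unload loop, as far as it can be read from the @120-@123 trace just above (the set -e / return 0 that close it follow below). The break condition and back-off are inferred; only the commands and the {1..20} bound appear in the trace:]

  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-rdma                 # produces the rmmod lines seen above
      modprobe -v -r nvme-fabrics && break     # inferred exit condition
      sleep 1                                  # inferred back-off between attempts
  done
  set -e
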
00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2447426 ']' 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2447426 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2447426 ']' 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2447426 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2447426 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2447426' 00:10:00.285 killing process with pid 2447426 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2447426 00:10:00.285 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2447426 00:10:00.544 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:00.544 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:00.544 00:10:00.544 real 0m18.506s 00:10:00.544 user 0m25.817s 00:10:00.544 sys 0m4.982s 00:10:00.544 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.544 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.544 ************************************ 00:10:00.544 END TEST nvmf_queue_depth 00:10:00.544 ************************************ 00:10:00.544 09:58:45 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:00.544 09:58:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:00.544 09:58:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.544 09:58:45 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.803 ************************************ 00:10:00.803 START TEST nvmf_target_multipath 00:10:00.803 ************************************ 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:10:00.803 * Looking for test storage... 
00:10:00.803 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.803 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.804 09:58:45 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@296 -- # e810=() 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:07.376 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:07.376 
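[What the array shuffling above boils down to: candidate NIC PCI addresses are grouped per family out of a prebuilt pci_bus_cache, and since the transport is rdma on the mlx5 driver, the candidate set collapses to the Mellanox entries; each device's netdev names are then read from sysfs. A reduced sketch — cache population is outside this trace, and 0x15b3:0x1015 is the vendor:device pair echoed in the "Found" lines:]

  mlx+=(${pci_bus_cache["$mellanox:0x1015"]})   # matches the two "Found 0000:da:00.x" devices
  pci_devs=("${mlx[@]}")                        # rdma + mlx5 narrows the list to the mlx family
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # kernel exposes netdev names here
      net_devs+=("${pci_net_devs[@]##*/}")               # strip the sysfs path -> mlx_0_0, mlx_0_1
  done
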
09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:07.376 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:07.377 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:07.377 Found net devices under 0000:da:00.0: mlx_0_0 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: 
mlx_0_1' 00:10:07.377 Found net devices under 0000:da:00.1: mlx_0_1 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@420 -- # rdma_device_init 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # uname 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 
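[The nested loop being traced here is get_rdma_if_list doing the same matching pass seen earlier in the queue_depth run: a netdev is emitted only if rxe_cfg also reports it as RDMA-capable. Reduced to its core, reconstructed from the @92-@105 trace lines, with the surrounding function plumbing inferred:]

  for net_dev in "${net_devs[@]}"; do
      for rxe_net_dev in "${rxe_net_devs[@]}"; do
          if [[ $net_dev == "$rxe_net_dev" ]]; then
              echo "$net_dev"
              continue 2    # on to the next net_dev once a match is printed
          fi
      done
  done
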
00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:07.377 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:07.377 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:07.377 altname enp218s0f0np0 00:10:07.377 altname ens818f0np0 00:10:07.377 inet 192.168.100.8/24 scope global mlx_0_0 00:10:07.377 valid_lft forever preferred_lft forever 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:07.377 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:07.377 
link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:10:07.377 altname enp218s0f1np1 00:10:07.377 altname ens818f1np1 00:10:07.377 inet 192.168.100.9/24 scope global mlx_0_1 00:10:07.377 valid_lft forever preferred_lft forever 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:07.377 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # continue 2 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:07.378 192.168.100.9' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:07.378 192.168.100.9' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # head -n 1 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:07.378 192.168.100.9' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # tail -n +2 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # head -n 1 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:10:07.378 run this test only with TCP transport for now 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@53 -- # nvmftestfini 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # 
'[' rdma == tcp ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:07.378 rmmod nvme_rdma 00:10:07.378 rmmod nvme_fabrics 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@54 -- # exit 0 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:07.378 00:10:07.378 real 0m5.898s 00:10:07.378 user 0m1.674s 00:10:07.378 sys 0m4.361s 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:07.378 ************************************ 00:10:07.378 END TEST nvmf_target_multipath 00:10:07.378 ************************************ 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.378 ************************************ 00:10:07.378 START TEST nvmf_zcopy 00:10:07.378 ************************************ 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:10:07.378 * Looking for test storage... 00:10:07.378 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:07.378 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:07.379 09:58:51 
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:07.379 
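The PATH values above grow because each "source paths/export.sh" prepends the Go, golangci and protoc tool directories again, once per sourced test script. A duplicate-safe prepend would look roughly like the sketch below; path_prepend is a hypothetical helper, not something export.sh actually defines:

    path_prepend() {
        # Prepend $1 to PATH only if it is not already present.
        case ":$PATH:" in
            *":$1:"*) ;;                 # already on PATH: leave it unchanged
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH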
09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:07.379 09:58:51 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.666 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.666 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:12.666 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:12.666 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:12.666 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:12.667 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:12.667 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- 
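The "Found 0000:da:00.0 (0x15b3 - 0x1015)" lines come from matching each PCI function against the vendor:device tables built just above (Intel 0x8086 for the E810/X722 parts, Mellanox 0x15b3 for the mlx5 parts). A simplified sysfs scan that produces the same kind of output; the real code reads from a prebuilt pci_bus_cache, and the ID list here is only the 0x1015 entry actually matched in this log:

    intel=0x8086 mellanox=0x15b3
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        case "$vendor:$device" in
            "$mellanox:0x1015")          # the Mellanox device id seen on this host
                echo "Found ${dev##*/} ($vendor - $device)" ;;
        esac
    done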
# [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:12.667 Found net devices under 0000:da:00.0: mlx_0_0 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:12.667 Found net devices under 0000:da:00.1: mlx_0_1 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@420 -- # rdma_device_init 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # uname 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:12.667 09:58:57 
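The "Found net devices under ..." lines are produced by globbing the netdev entries that sysfs places under each PCI function, as the common.sh@383/@399/@400 trace above shows; standalone, with the glob assumed to match at least one interface:

    pci=0000:da:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"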
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:12.667 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 
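rdma_device_init loads the whole RDMA stack before any interface is touched; the modprobe sequence traced above is equivalent to this loop (module names taken verbatim from the log, root privileges assumed):

    # Core verbs, connection managers, and the user-space access modules.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done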
00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:12.668 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:12.668 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:12.668 altname enp218s0f0np0 00:10:12.668 altname ens818f0np0 00:10:12.668 inet 192.168.100.8/24 scope global mlx_0_0 00:10:12.668 valid_lft forever preferred_lft forever 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:12.668 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:12.668 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:10:12.668 altname enp218s0f1np1 00:10:12.668 altname ens818f1np1 00:10:12.668 inet 192.168.100.9/24 scope global mlx_0_1 00:10:12.668 valid_lft forever preferred_lft forever 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:10:12.668 09:58:57 
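get_ip_address reduces the one-line "ip -o -4 addr show" output to a bare IPv4 address, which is how 192.168.100.8 and 192.168.100.9 are extracted above; a standalone version of the pipeline shown at common.sh@113:

    get_ip_address() {
        local interface=$1
        # Field 4 of the one-line output is "address/prefix"; drop the prefix.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this host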
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # continue 2 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:12.668 192.168.100.9' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:12.668 192.168.100.9' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # head -n 1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:12.668 192.168.100.9' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # tail -n +2 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # head -n 1 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:12.668 09:58:57 
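RDMA_IP_LIST is a newline-separated list, and the first and second target IPs are peeled off with head/tail exactly as common.sh@457/@458 do above:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(head -n 1 <<< "$RDMA_IP_LIST")
    NVMF_SECOND_TARGET_IP=$(tail -n +2 <<< "$RDMA_IP_LIST" | head -n 1)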
nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:12.668 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2455647 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2455647 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2455647 ']' 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:12.669 09:58:57 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:12.669 [2024-07-25 09:58:57.587641] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:12.669 [2024-07-25 09:58:57.587700] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.669 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.669 [2024-07-25 09:58:57.654743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.669 [2024-07-25 09:58:57.733474] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.669 [2024-07-25 09:58:57.733509] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.669 [2024-07-25 09:58:57.733516] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.669 [2024-07-25 09:58:57.733522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.669 [2024-07-25 09:58:57.733527] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
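nvmfappstart launches nvmf_tgt with the shared-memory id, tracepoint mask and core mask shown in the log, and waitforlisten then blocks until the RPC socket is usable. A rough sketch of that start-and-wait pattern; the polling loop is an assumption about waitforlisten's behaviour, not a copy of it:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Wait until the target's UNIX-domain RPC socket appears, bailing out
    # early if the process dies first.
    while ! [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.5
    done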
00:10:12.669 [2024-07-25 09:58:57.733544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:13.236 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:13.236 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:13.236 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.236 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:13.236 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:10:13.494 Unsupported transport: rdma 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # exit 0 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # process_shm --id 0 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@808 -- # type=--id 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@809 -- # id=0 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@820 -- # for n in $shm_files 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:13.494 nvmf_trace.0 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@823 -- # return 0 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@1 -- # nvmftestfini 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:13.494 rmmod nvme_rdma 00:10:13.494 rmmod nvme_fabrics 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 
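On exit, process_shm archives the SPDK trace shared-memory file so the run can be analysed offline; the tar call traced above boils down to the following, with output_dir standing in for the .../spdk/../output path from the log:

    shm_file=nvmf_trace.0
    tar -C /dev/shm/ -cvzf "$output_dir/${shm_file}_shm.tar.gz" "$shm_file"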
00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2455647 ']' 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2455647 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2455647 ']' 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2455647 00:10:13.494 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:13.495 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.495 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2455647 00:10:13.495 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:13.495 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:13.495 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2455647' 00:10:13.495 killing process with pid 2455647 00:10:13.495 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2455647 00:10:13.495 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2455647 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:13.754 00:10:13.754 real 0m7.033s 00:10:13.754 user 0m3.061s 00:10:13.754 sys 0m4.613s 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:13.754 ************************************ 00:10:13.754 END TEST nvmf_zcopy 00:10:13.754 ************************************ 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:13.754 ************************************ 00:10:13.754 START TEST nvmf_nmic 00:10:13.754 ************************************ 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:10:13.754 * Looking for test storage... 
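The zcopy teardown above ends with killprocess, which only signals a pid after confirming it is still alive and is not sudo itself; condensed from the autotest_common.sh steps visible in the log (the real helper has more fallbacks):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                   # still running?
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1               # never kill sudo itself
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }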
00:10:13.754 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.754 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:14.013 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
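MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512 are the dimensions of the RAM-backed bdev the nmic test exercises; presumably they feed a bdev_malloc_create RPC along these lines (the bdev name Malloc0 is an assumption here, the RPC itself is standard SPDK):

    # 64 MiB malloc bdev with 512-byte blocks, later exposed over NVMe-oF.
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512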
00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.014 09:58:58 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:19.331 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:19.331 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:19.331 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.332 
09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:19.332 Found net devices under 0000:da:00.0: mlx_0_0 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:19.332 Found net devices under 0000:da:00.1: mlx_0_1 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@420 -- # rdma_device_init 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # uname 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:19.332 09:59:04 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:19.332 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:19.592 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:19.592 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:19.592 altname enp218s0f0np0 00:10:19.592 altname ens818f0np0 00:10:19.592 inet 192.168.100.8/24 scope global mlx_0_0 00:10:19.592 valid_lft forever preferred_lft forever 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 
00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:19.592 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:19.592 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:10:19.592 altname enp218s0f1np1 00:10:19.592 altname ens818f1np1 00:10:19.592 inet 192.168.100.9/24 scope global mlx_0_1 00:10:19.592 valid_lft forever preferred_lft forever 00:10:19.592 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # echo mlx_0_1 
00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # continue 2 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:19.593 192.168.100.9' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:19.593 192.168.100.9' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # head -n 1 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:19.593 192.168.100.9' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # tail -n +2 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # head -n 1 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2458964 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2458964 
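The two addresses recorded above, 192.168.100.8 and 192.168.100.9, are derived purely from ip(8) output: for each RDMA-capable netdev the harness takes the fourth field of the one-line IPv4 listing and strips the prefix length, then the head/tail calls split RDMA_IP_LIST into NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal stand-alone sketch of the same pipeline, with the interface name assumed:

    # first IPv4 address of an interface, without the /24 prefix length
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # -> 192.168.100.8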
00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2458964 ']' 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.593 09:59:04 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:19.593 [2024-07-25 09:59:04.692500] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:19.593 [2024-07-25 09:59:04.692552] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:19.593 EAL: No free 2048 kB hugepages reported on node 1 00:10:19.852 [2024-07-25 09:59:04.760498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:19.852 [2024-07-25 09:59:04.843064] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:19.852 [2024-07-25 09:59:04.843100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:19.852 [2024-07-25 09:59:04.843107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:19.852 [2024-07-25 09:59:04.843113] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:19.852 [2024-07-25 09:59:04.843118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
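nvmfappstart launches the target binary shown above with shared-memory id 0 (-i 0), every tracepoint group enabled (-e 0xFFFF), and core mask 0xF, which is why DPDK reports four available cores and four reactors come up on cores 0 through 3 below. A hedged sketch of the launch; the pid bookkeeping is assumed, since the trace only shows the resulting nvmfpid:

    # -m 0xF pins reactors to cores 0-3; -e 0xFFFF enables all tracepoint groups
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!    # assumed capture; the trace records nvmfpid=2458964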
00:10:19.852 [2024-07-25 09:59:04.843182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:19.853 [2024-07-25 09:59:04.843292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:19.853 [2024-07-25 09:59:04.843318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.853 [2024-07-25 09:59:04.843319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:20.420 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.420 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:20.420 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:20.420 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.420 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.420 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.420 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:20.420 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.420 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.420 [2024-07-25 09:59:05.562535] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ba3cc0/0x1ba81b0) succeed. 00:10:20.420 [2024-07-25 09:59:05.571645] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ba5300/0x1be9840) succeed. 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.679 Malloc0 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:20.679 09:59:05 
nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.679 [2024-07-25 09:59:05.736770] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:20.679 test case1: single bdev can't be used in multiple subsystems 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:20.679 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.680 [2024-07-25 09:59:05.760526] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:20.680 [2024-07-25 09:59:05.760546] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:20.680 [2024-07-25 09:59:05.760554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:20.680 request: 00:10:20.680 { 00:10:20.680 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:20.680 "namespace": { 00:10:20.680 "bdev_name": "Malloc0", 00:10:20.680 "no_auto_visible": false 00:10:20.680 }, 00:10:20.680 "method": "nvmf_subsystem_add_ns", 00:10:20.680 "req_id": 1 00:10:20.680 } 00:10:20.680 Got JSON-RPC error response 00:10:20.680 response: 00:10:20.680 { 00:10:20.680 "code": -32602, 00:10:20.680 "message": "Invalid parameters" 00:10:20.680 } 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:20.680 Adding namespace failed - expected result. 
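Test case1 above exercises the bdev claim rules: adding Malloc0 as a namespace of cnode1 made the NVMe-oF target take an exclusive_write claim on it, so the second nvmf_subsystem_add_ns against cnode2 is refused with JSON-RPC error -32602 and the script counts that failure as the expected result. rpc_cmd in the trace effectively forwards to scripts/rpc.py, so the same sequence issued by hand would look roughly like:

    # the second claim on the same bdev is expected to fail
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: already claimed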
00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:20.680 test case2: host connect to nvmf target in multiple paths 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:20.680 [2024-07-25 09:59:05.772583] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.680 09:59:05 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:22.056 09:59:06 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:10:22.623 09:59:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.623 09:59:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:22.623 09:59:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.623 09:59:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:22.623 09:59:07 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:25.154 09:59:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:25.154 09:59:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:25.154 09:59:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:25.154 09:59:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:25.154 09:59:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:25.154 09:59:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:25.154 09:59:09 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:25.154 [global] 00:10:25.154 thread=1 00:10:25.154 invalidate=1 00:10:25.154 rw=write 00:10:25.154 time_based=1 00:10:25.154 runtime=1 00:10:25.154 ioengine=libaio 00:10:25.154 direct=1 00:10:25.154 bs=4096 00:10:25.154 iodepth=1 00:10:25.154 norandommap=0 00:10:25.154 numjobs=1 00:10:25.154 00:10:25.154 verify_dump=1 00:10:25.154 verify_backlog=512 00:10:25.154 verify_state_save=0 00:10:25.154 do_verify=1 00:10:25.154 verify=crc32c-intel 00:10:25.154 [job0] 00:10:25.154 filename=/dev/nvme0n1 00:10:25.154 Could not set queue depth (nvme0n1) 00:10:25.154 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:25.154 fio-3.35 00:10:25.154 Starting 1 thread 00:10:26.089 00:10:26.089 job0: (groupid=0, jobs=1): err= 0: pid=2460027: Thu Jul 25 09:59:11 2024 00:10:26.089 read: IOPS=7251, BW=28.3MiB/s (29.7MB/s)(28.4MiB/1001msec) 00:10:26.089 slat (nsec): min=6170, max=30652, avg=6918.79, stdev=739.31 00:10:26.089 clat (nsec): min=43466, max=86591, avg=58468.99, stdev=3866.16 00:10:26.089 lat (usec): min=56, max=114, avg=65.39, stdev= 3.95 00:10:26.089 clat percentiles (nsec): 00:10:26.089 | 1.00th=[50944], 5.00th=[52480], 10.00th=[53504], 20.00th=[55040], 00:10:26.089 | 30.00th=[56064], 40.00th=[57088], 50.00th=[58112], 60.00th=[59648], 00:10:26.089 | 70.00th=[60672], 80.00th=[61696], 90.00th=[63232], 95.00th=[64768], 00:10:26.089 | 99.00th=[68096], 99.50th=[69120], 99.90th=[75264], 99.95th=[82432], 00:10:26.089 | 99.99th=[86528] 00:10:26.089 write: IOPS=7672, BW=30.0MiB/s (31.4MB/s)(30.0MiB/1001msec); 0 zone resets 00:10:26.089 slat (nsec): min=7888, max=39008, avg=8801.05, stdev=1012.74 00:10:26.089 clat (usec): min=39, max=104, avg=55.91, stdev= 3.92 00:10:26.089 lat (usec): min=55, max=143, avg=64.71, stdev= 4.06 00:10:26.089 clat percentiles (usec): 00:10:26.089 | 1.00th=[ 49], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:10:26.089 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:10:26.089 | 70.00th=[ 58], 80.00th=[ 60], 90.00th=[ 62], 95.00th=[ 63], 00:10:26.089 | 99.00th=[ 67], 99.50th=[ 68], 99.90th=[ 73], 99.95th=[ 82], 00:10:26.089 | 99.99th=[ 105] 00:10:26.089 bw ( KiB/s): min=30880, max=30880, per=100.00%, avg=30880.00, stdev= 0.00, samples=1 00:10:26.089 iops : min= 7720, max= 7720, avg=7720.00, stdev= 0.00, samples=1 00:10:26.089 lat (usec) : 50=2.07%, 100=97.92%, 250=0.01% 00:10:26.089 cpu : usr=8.30%, sys=15.50%, ctx=14939, majf=0, minf=2 00:10:26.089 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.089 issued rwts: total=7259,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.089 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.089 00:10:26.089 Run status group 0 (all jobs): 00:10:26.089 READ: bw=28.3MiB/s (29.7MB/s), 28.3MiB/s-28.3MiB/s (29.7MB/s-29.7MB/s), io=28.4MiB (29.7MB), run=1001-1001msec 00:10:26.089 WRITE: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=30.0MiB (31.5MB), run=1001-1001msec 00:10:26.089 00:10:26.089 Disk stats (read/write): 00:10:26.089 nvme0n1: ios=6706/6755, merge=0/0, ticks=356/327, in_queue=683, util=90.68% 00:10:26.089 09:59:11 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:27.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:27.989 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:27.989 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:27.989 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:27.989 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:28.247 
09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:28.247 rmmod nvme_rdma 00:10:28.247 rmmod nvme_fabrics 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2458964 ']' 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2458964 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2458964 ']' 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2458964 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2458964 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2458964' 00:10:28.247 killing process with pid 2458964 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2458964 00:10:28.247 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2458964 00:10:28.505 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:28.505 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:28.505 00:10:28.505 real 0m14.771s 00:10:28.505 user 0m41.893s 00:10:28.505 sys 0m5.031s 00:10:28.505 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:28.505 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:28.505 ************************************ 00:10:28.505 END TEST nvmf_nmic 
00:10:28.505 ************************************ 00:10:28.505 09:59:13 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:28.505 09:59:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:28.505 09:59:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:28.505 09:59:13 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:28.505 ************************************ 00:10:28.505 START TEST nvmf_fio_target 00:10:28.505 ************************************ 00:10:28.505 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:10:28.764 * Looking for test storage... 00:10:28.764 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.765 09:59:13 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.765 09:59:13 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:28.765 09:59:13 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 
00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:10:35.335 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 
-- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:10:35.335 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:35.335 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:10:35.336 Found net devices under 0000:da:00.0: mlx_0_0 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:10:35.336 Found net devices under 0000:da:00.1: mlx_0_1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@414 -- # is_hw=yes 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@420 -- # rdma_device_init 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # uname 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:35.336 09:59:19 
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:10:35.336 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:35.336 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:10:35.336 altname enp218s0f0np0 00:10:35.336 altname ens818f0np0 00:10:35.336 inet 192.168.100.8/24 scope global mlx_0_0 00:10:35.336 valid_lft forever preferred_lft forever 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:10:35.336 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:10:35.336 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:10:35.336 altname enp218s0f1np1 00:10:35.336 altname ens818f1np1 00:10:35.336 inet 192.168.100.9/24 scope global mlx_0_1 00:10:35.336 valid_lft forever preferred_lft forever 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ rdma == 
\r\d\m\a ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # continue 2 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # awk '{print $4}' 
00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:10:35.336 192.168.100.9' 00:10:35.336 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:10:35.337 192.168.100.9' 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # head -n 1 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:10:35.337 192.168.100.9' 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # tail -n +2 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # head -n 1 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2463806 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2463806 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2463806 ']' 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:35.337 09:59:19 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.337 [2024-07-25 09:59:19.549103] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:35.337 [2024-07-25 09:59:19.549156] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.337 EAL: No free 2048 kB hugepages reported on node 1 00:10:35.337 [2024-07-25 09:59:19.616867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.337 [2024-07-25 09:59:19.696445] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.337 [2024-07-25 09:59:19.696480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:35.337 [2024-07-25 09:59:19.696487] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.337 [2024-07-25 09:59:19.696493] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.337 [2024-07-25 09:59:19.696498] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.337 [2024-07-25 09:59:19.696563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.337 [2024-07-25 09:59:19.696668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.337 [2024-07-25 09:59:19.696776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.337 [2024-07-25 09:59:19.696777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.337 09:59:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:35.337 09:59:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:35.337 09:59:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:35.337 09:59:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:35.337 09:59:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:35.337 09:59:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.337 09:59:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:10:35.595 [2024-07-25 09:59:20.578038] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1e04cc0/0x1e091b0) succeed. 00:10:35.595 [2024-07-25 09:59:20.587325] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1e06300/0x1e4a840) succeed. 
00:10:35.595 09:59:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:35.854 09:59:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:35.854 09:59:20 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.112 09:59:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:36.112 09:59:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.370 09:59:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:36.370 09:59:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.370 09:59:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:36.371 09:59:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:36.629 09:59:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:36.888 09:59:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:36.888 09:59:21 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.147 09:59:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:37.147 09:59:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:37.147 09:59:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:37.147 09:59:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:37.406 09:59:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:37.664 09:59:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:37.664 09:59:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:37.664 09:59:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:37.664 09:59:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:37.942 09:59:22 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:10:38.243 [2024-07-25 09:59:23.173926] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:10:38.243 09:59:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:38.243 09:59:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:38.501 09:59:23 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:10:39.437 09:59:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:39.437 09:59:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:39.437 09:59:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:39.437 09:59:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:39.437 09:59:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:39.437 09:59:24 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:41.970 09:59:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:41.970 09:59:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:41.970 09:59:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:41.970 09:59:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:41.970 09:59:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:41.970 09:59:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:41.970 09:59:26 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:41.970 [global] 00:10:41.970 thread=1 00:10:41.970 invalidate=1 00:10:41.970 rw=write 00:10:41.970 time_based=1 00:10:41.970 runtime=1 00:10:41.970 ioengine=libaio 00:10:41.970 direct=1 00:10:41.970 bs=4096 00:10:41.970 iodepth=1 00:10:41.970 norandommap=0 00:10:41.970 numjobs=1 00:10:41.970 00:10:41.970 verify_dump=1 00:10:41.970 verify_backlog=512 00:10:41.970 verify_state_save=0 00:10:41.970 do_verify=1 00:10:41.970 verify=crc32c-intel 00:10:41.970 [job0] 00:10:41.970 filename=/dev/nvme0n1 00:10:41.970 [job1] 00:10:41.970 filename=/dev/nvme0n2 00:10:41.970 [job2] 00:10:41.970 filename=/dev/nvme0n3 00:10:41.970 [job3] 00:10:41.970 filename=/dev/nvme0n4 00:10:41.970 Could not set queue depth (nvme0n1) 00:10:41.970 Could not set queue depth (nvme0n2) 00:10:41.970 Could not set queue depth (nvme0n3) 00:10:41.970 Could not set queue depth (nvme0n4) 00:10:41.970 job0: (g=0): rw=write, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.970 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.970 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.970 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:41.970 fio-3.35 00:10:41.970 Starting 4 threads 00:10:43.349 00:10:43.349 job0: (groupid=0, jobs=1): err= 0: pid=2465168: Thu Jul 25 09:59:28 2024 00:10:43.349 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:10:43.349 slat (nsec): min=5870, max=23303, avg=6748.54, stdev=780.90 00:10:43.349 clat (usec): min=64, max=174, avg=86.21, stdev=17.68 00:10:43.349 lat (usec): min=70, max=180, avg=92.96, stdev=17.77 00:10:43.349 clat percentiles (usec): 00:10:43.349 | 1.00th=[ 69], 5.00th=[ 71], 10.00th=[ 73], 20.00th=[ 75], 00:10:43.349 | 30.00th=[ 77], 40.00th=[ 78], 50.00th=[ 80], 60.00th=[ 82], 00:10:43.349 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 118], 95.00th=[ 125], 00:10:43.349 | 99.00th=[ 145], 99.50th=[ 155], 99.90th=[ 167], 99.95th=[ 169], 00:10:43.349 | 99.99th=[ 176] 00:10:43.349 write: IOPS=5569, BW=21.8MiB/s (22.8MB/s)(21.8MiB/1001msec); 0 zone resets 00:10:43.349 slat (nsec): min=7489, max=36158, avg=8639.00, stdev=803.51 00:10:43.349 clat (usec): min=57, max=261, avg=81.73, stdev=16.73 00:10:43.349 lat (usec): min=65, max=270, avg=90.36, stdev=16.81 00:10:43.349 clat percentiles (usec): 00:10:43.349 | 1.00th=[ 64], 5.00th=[ 67], 10.00th=[ 69], 20.00th=[ 71], 00:10:43.349 | 30.00th=[ 73], 40.00th=[ 74], 50.00th=[ 76], 60.00th=[ 78], 00:10:43.349 | 70.00th=[ 82], 80.00th=[ 94], 90.00th=[ 110], 95.00th=[ 117], 00:10:43.349 | 99.00th=[ 137], 99.50th=[ 147], 99.90th=[ 155], 99.95th=[ 163], 00:10:43.349 | 99.99th=[ 262] 00:10:43.349 bw ( KiB/s): min=20480, max=20480, per=25.32%, avg=20480.00, stdev= 0.00, samples=1 00:10:43.349 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:43.349 lat (usec) : 100=82.38%, 250=17.61%, 500=0.01% 00:10:43.349 cpu : usr=6.30%, sys=10.60%, ctx=10695, majf=0, minf=1 00:10:43.349 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.349 issued rwts: total=5120,5575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.349 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.350 job1: (groupid=0, jobs=1): err= 0: pid=2465169: Thu Jul 25 09:59:28 2024 00:10:43.350 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:10:43.350 slat (nsec): min=5952, max=25737, avg=6879.49, stdev=708.37 00:10:43.350 clat (usec): min=63, max=176, avg=86.93, stdev=17.21 00:10:43.350 lat (usec): min=72, max=183, avg=93.81, stdev=17.27 00:10:43.350 clat percentiles (usec): 00:10:43.350 | 1.00th=[ 71], 5.00th=[ 73], 10.00th=[ 74], 20.00th=[ 76], 00:10:43.350 | 30.00th=[ 78], 40.00th=[ 79], 50.00th=[ 81], 60.00th=[ 83], 00:10:43.350 | 70.00th=[ 86], 80.00th=[ 95], 90.00th=[ 118], 95.00th=[ 124], 00:10:43.350 | 99.00th=[ 143], 99.50th=[ 155], 99.90th=[ 165], 99.95th=[ 169], 00:10:43.350 | 99.99th=[ 178] 00:10:43.350 write: IOPS=5444, BW=21.3MiB/s (22.3MB/s)(21.3MiB/1001msec); 0 zone resets 00:10:43.350 slat (nsec): min=7949, max=77782, avg=8811.72, stdev=1317.63 00:10:43.350 clat (usec): min=60, max=161, avg=82.86, 
stdev=16.08 00:10:43.350 lat (usec): min=69, max=176, avg=91.67, stdev=16.20 00:10:43.350 clat percentiles (usec): 00:10:43.350 | 1.00th=[ 67], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:10:43.350 | 30.00th=[ 74], 40.00th=[ 76], 50.00th=[ 77], 60.00th=[ 79], 00:10:43.350 | 70.00th=[ 83], 80.00th=[ 95], 90.00th=[ 111], 95.00th=[ 117], 00:10:43.350 | 99.00th=[ 141], 99.50th=[ 145], 99.90th=[ 155], 99.95th=[ 159], 00:10:43.350 | 99.99th=[ 163] 00:10:43.350 bw ( KiB/s): min=20480, max=20480, per=25.32%, avg=20480.00, stdev= 0.00, samples=1 00:10:43.350 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:43.350 lat (usec) : 100=82.15%, 250=17.85% 00:10:43.350 cpu : usr=6.40%, sys=10.70%, ctx=10571, majf=0, minf=1 00:10:43.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.350 issued rwts: total=5120,5450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.350 job2: (groupid=0, jobs=1): err= 0: pid=2465170: Thu Jul 25 09:59:28 2024 00:10:43.350 read: IOPS=4461, BW=17.4MiB/s (18.3MB/s)(17.4MiB/1001msec) 00:10:43.350 slat (nsec): min=6191, max=36864, avg=8406.36, stdev=2484.80 00:10:43.350 clat (usec): min=69, max=203, avg=101.97, stdev=21.28 00:10:43.350 lat (usec): min=78, max=211, avg=110.38, stdev=21.55 00:10:43.350 clat percentiles (usec): 00:10:43.350 | 1.00th=[ 79], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 87], 00:10:43.350 | 30.00th=[ 89], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 97], 00:10:43.350 | 70.00th=[ 104], 80.00th=[ 126], 90.00th=[ 135], 95.00th=[ 141], 00:10:43.350 | 99.00th=[ 172], 99.50th=[ 178], 99.90th=[ 190], 99.95th=[ 192], 00:10:43.350 | 99.99th=[ 204] 00:10:43.350 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:43.350 slat (nsec): min=8269, max=36857, avg=10547.07, stdev=2568.21 00:10:43.350 clat (usec): min=65, max=181, avg=94.62, stdev=19.13 00:10:43.350 lat (usec): min=80, max=190, avg=105.17, stdev=19.53 00:10:43.350 clat percentiles (usec): 00:10:43.350 | 1.00th=[ 75], 5.00th=[ 78], 10.00th=[ 79], 20.00th=[ 82], 00:10:43.350 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 90], 00:10:43.350 | 70.00th=[ 94], 80.00th=[ 111], 90.00th=[ 127], 95.00th=[ 133], 00:10:43.350 | 99.00th=[ 163], 99.50th=[ 167], 99.90th=[ 176], 99.95th=[ 180], 00:10:43.350 | 99.99th=[ 182] 00:10:43.350 bw ( KiB/s): min=20480, max=20480, per=25.32%, avg=20480.00, stdev= 0.00, samples=1 00:10:43.350 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:43.350 lat (usec) : 100=71.35%, 250=28.65% 00:10:43.350 cpu : usr=4.70%, sys=10.90%, ctx=9074, majf=0, minf=1 00:10:43.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.350 issued rwts: total=4466,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.350 job3: (groupid=0, jobs=1): err= 0: pid=2465171: Thu Jul 25 09:59:28 2024 00:10:43.350 read: IOPS=4322, BW=16.9MiB/s (17.7MB/s)(16.9MiB/1001msec) 00:10:43.350 slat (nsec): min=6099, max=26641, avg=7327.44, stdev=1244.45 00:10:43.350 clat (usec): min=75, max=286, avg=105.11, stdev=20.87 
00:10:43.350 lat (usec): min=82, max=293, avg=112.44, stdev=21.10 00:10:43.350 clat percentiles (usec): 00:10:43.350 | 1.00th=[ 81], 5.00th=[ 85], 10.00th=[ 87], 20.00th=[ 90], 00:10:43.350 | 30.00th=[ 92], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 101], 00:10:43.350 | 70.00th=[ 109], 80.00th=[ 127], 90.00th=[ 137], 95.00th=[ 143], 00:10:43.350 | 99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 192], 99.95th=[ 194], 00:10:43.350 | 99.99th=[ 285] 00:10:43.350 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:10:43.350 slat (nsec): min=8215, max=35997, avg=9460.79, stdev=1711.01 00:10:43.350 clat (usec): min=71, max=208, avg=97.86, stdev=18.08 00:10:43.350 lat (usec): min=80, max=217, avg=107.32, stdev=18.76 00:10:43.350 clat percentiles (usec): 00:10:43.350 | 1.00th=[ 78], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 85], 00:10:43.350 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 92], 60.00th=[ 95], 00:10:43.350 | 70.00th=[ 99], 80.00th=[ 112], 90.00th=[ 127], 95.00th=[ 135], 00:10:43.350 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 186], 99.95th=[ 188], 00:10:43.350 | 99.99th=[ 208] 00:10:43.350 bw ( KiB/s): min=20480, max=20480, per=25.32%, avg=20480.00, stdev= 0.00, samples=1 00:10:43.350 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:43.350 lat (usec) : 100=65.84%, 250=34.15%, 500=0.01% 00:10:43.350 cpu : usr=5.30%, sys=9.50%, ctx=8935, majf=0, minf=2 00:10:43.350 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:43.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.350 issued rwts: total=4327,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.350 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:43.350 00:10:43.350 Run status group 0 (all jobs): 00:10:43.350 READ: bw=74.3MiB/s (77.9MB/s), 16.9MiB/s-20.0MiB/s (17.7MB/s-20.9MB/s), io=74.3MiB (78.0MB), run=1001-1001msec 00:10:43.350 WRITE: bw=79.0MiB/s (82.8MB/s), 18.0MiB/s-21.8MiB/s (18.9MB/s-22.8MB/s), io=79.1MiB (82.9MB), run=1001-1001msec 00:10:43.350 00:10:43.350 Disk stats (read/write): 00:10:43.350 nvme0n1: ios=4442/4608, merge=0/0, ticks=382/357, in_queue=739, util=86.97% 00:10:43.350 nvme0n2: ios=4344/4608, merge=0/0, ticks=350/350, in_queue=700, util=87.32% 00:10:43.350 nvme0n3: ios=3984/4096, merge=0/0, ticks=375/340, in_queue=715, util=89.13% 00:10:43.350 nvme0n4: ios=3846/4096, merge=0/0, ticks=369/350, in_queue=719, util=89.69% 00:10:43.350 09:59:28 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:43.350 [global] 00:10:43.350 thread=1 00:10:43.350 invalidate=1 00:10:43.350 rw=randwrite 00:10:43.350 time_based=1 00:10:43.350 runtime=1 00:10:43.350 ioengine=libaio 00:10:43.350 direct=1 00:10:43.350 bs=4096 00:10:43.350 iodepth=1 00:10:43.350 norandommap=0 00:10:43.350 numjobs=1 00:10:43.350 00:10:43.350 verify_dump=1 00:10:43.350 verify_backlog=512 00:10:43.350 verify_state_save=0 00:10:43.350 do_verify=1 00:10:43.350 verify=crc32c-intel 00:10:43.350 [job0] 00:10:43.350 filename=/dev/nvme0n1 00:10:43.350 [job1] 00:10:43.350 filename=/dev/nvme0n2 00:10:43.350 [job2] 00:10:43.350 filename=/dev/nvme0n3 00:10:43.350 [job3] 00:10:43.350 filename=/dev/nvme0n4 00:10:43.350 Could not set queue depth (nvme0n1) 00:10:43.350 Could not set queue depth (nvme0n2) 00:10:43.350 Could not set queue depth (nvme0n3) 
00:10:43.350 Could not set queue depth (nvme0n4) 00:10:43.350 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.350 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.350 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.350 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:43.350 fio-3.35 00:10:43.350 Starting 4 threads 00:10:44.737 00:10:44.737 job0: (groupid=0, jobs=1): err= 0: pid=2465551: Thu Jul 25 09:59:29 2024 00:10:44.737 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:44.737 slat (nsec): min=5883, max=25524, avg=6901.36, stdev=887.48 00:10:44.737 clat (usec): min=64, max=220, avg=126.09, stdev=22.76 00:10:44.737 lat (usec): min=70, max=226, avg=132.99, stdev=22.72 00:10:44.737 clat percentiles (usec): 00:10:44.737 | 1.00th=[ 79], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 108], 00:10:44.737 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 118], 60.00th=[ 135], 00:10:44.737 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:10:44.737 | 99.00th=[ 182], 99.50th=[ 190], 99.90th=[ 208], 99.95th=[ 212], 00:10:44.737 | 99.99th=[ 221] 00:10:44.737 write: IOPS=3950, BW=15.4MiB/s (16.2MB/s)(15.4MiB/1001msec); 0 zone resets 00:10:44.737 slat (nsec): min=7747, max=37722, avg=8734.43, stdev=1076.92 00:10:44.737 clat (usec): min=60, max=202, avg=119.84, stdev=22.55 00:10:44.737 lat (usec): min=68, max=210, avg=128.57, stdev=22.68 00:10:44.737 clat percentiles (usec): 00:10:44.737 | 1.00th=[ 73], 5.00th=[ 89], 10.00th=[ 96], 20.00th=[ 101], 00:10:44.737 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 114], 60.00th=[ 131], 00:10:44.737 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 157], 00:10:44.737 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 194], 99.95th=[ 194], 00:10:44.737 | 99.99th=[ 202] 00:10:44.737 bw ( KiB/s): min=14152, max=14152, per=22.07%, avg=14152.00, stdev= 0.00, samples=1 00:10:44.737 iops : min= 3538, max= 3538, avg=3538.00, stdev= 0.00, samples=1 00:10:44.737 lat (usec) : 100=12.27%, 250=87.73% 00:10:44.737 cpu : usr=4.20%, sys=8.10%, ctx=7538, majf=0, minf=1 00:10:44.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.737 issued rwts: total=3584,3954,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.737 job1: (groupid=0, jobs=1): err= 0: pid=2465558: Thu Jul 25 09:59:29 2024 00:10:44.737 read: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec) 00:10:44.737 slat (nsec): min=5871, max=23793, avg=7081.05, stdev=1213.50 00:10:44.737 clat (usec): min=64, max=219, avg=126.47, stdev=22.83 00:10:44.737 lat (usec): min=71, max=225, avg=133.55, stdev=23.05 00:10:44.737 clat percentiles (usec): 00:10:44.737 | 1.00th=[ 81], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 108], 00:10:44.737 | 30.00th=[ 111], 40.00th=[ 114], 50.00th=[ 118], 60.00th=[ 135], 00:10:44.737 | 70.00th=[ 145], 80.00th=[ 149], 90.00th=[ 157], 95.00th=[ 161], 00:10:44.737 | 99.00th=[ 190], 99.50th=[ 198], 99.90th=[ 215], 99.95th=[ 217], 00:10:44.737 | 99.99th=[ 221] 00:10:44.737 write: IOPS=3917, BW=15.3MiB/s (16.0MB/s)(15.3MiB/1001msec); 0 zone resets 
00:10:44.737 slat (nsec): min=7581, max=31663, avg=9046.44, stdev=1523.62 00:10:44.737 clat (usec): min=62, max=202, avg=119.92, stdev=22.42 00:10:44.737 lat (usec): min=70, max=214, avg=128.96, stdev=22.86 00:10:44.737 clat percentiles (usec): 00:10:44.737 | 1.00th=[ 73], 5.00th=[ 90], 10.00th=[ 96], 20.00th=[ 101], 00:10:44.737 | 30.00th=[ 104], 40.00th=[ 109], 50.00th=[ 114], 60.00th=[ 131], 00:10:44.737 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 147], 95.00th=[ 157], 00:10:44.737 | 99.00th=[ 178], 99.50th=[ 184], 99.90th=[ 194], 99.95th=[ 200], 00:10:44.737 | 99.99th=[ 202] 00:10:44.737 bw ( KiB/s): min=13920, max=13920, per=21.71%, avg=13920.00, stdev= 0.00, samples=1 00:10:44.737 iops : min= 3480, max= 3480, avg=3480.00, stdev= 0.00, samples=1 00:10:44.737 lat (usec) : 100=12.29%, 250=87.71% 00:10:44.737 cpu : usr=5.00%, sys=7.70%, ctx=7505, majf=0, minf=2 00:10:44.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.737 issued rwts: total=3584,3921,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.737 job2: (groupid=0, jobs=1): err= 0: pid=2465573: Thu Jul 25 09:59:29 2024 00:10:44.737 read: IOPS=3365, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1001msec) 00:10:44.737 slat (nsec): min=6039, max=29893, avg=8837.93, stdev=3167.93 00:10:44.737 clat (usec): min=76, max=210, avg=134.07, stdev=19.05 00:10:44.737 lat (usec): min=83, max=217, avg=142.91, stdev=18.21 00:10:44.737 clat percentiles (usec): 00:10:44.737 | 1.00th=[ 88], 5.00th=[ 110], 10.00th=[ 115], 20.00th=[ 120], 00:10:44.737 | 30.00th=[ 123], 40.00th=[ 127], 50.00th=[ 133], 60.00th=[ 139], 00:10:44.737 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 157], 95.00th=[ 165], 00:10:44.737 | 99.00th=[ 188], 99.50th=[ 196], 99.90th=[ 202], 99.95th=[ 204], 00:10:44.737 | 99.99th=[ 210] 00:10:44.737 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets 00:10:44.737 slat (nsec): min=7855, max=46100, avg=10932.15, stdev=3383.27 00:10:44.737 clat (usec): min=72, max=196, avg=128.82, stdev=17.44 00:10:44.737 lat (usec): min=81, max=221, avg=139.75, stdev=16.94 00:10:44.737 clat percentiles (usec): 00:10:44.737 | 1.00th=[ 83], 5.00th=[ 102], 10.00th=[ 112], 20.00th=[ 117], 00:10:44.737 | 30.00th=[ 120], 40.00th=[ 124], 50.00th=[ 129], 60.00th=[ 133], 00:10:44.737 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 151], 95.00th=[ 161], 00:10:44.737 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 190], 99.95th=[ 192], 00:10:44.737 | 99.99th=[ 198] 00:10:44.737 bw ( KiB/s): min=13928, max=13928, per=21.72%, avg=13928.00, stdev= 0.00, samples=1 00:10:44.737 iops : min= 3482, max= 3482, avg=3482.00, stdev= 0.00, samples=1 00:10:44.737 lat (usec) : 100=3.87%, 250=96.13% 00:10:44.737 cpu : usr=5.00%, sys=7.80%, ctx=6953, majf=0, minf=1 00:10:44.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.737 issued rwts: total=3369,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.737 job3: (groupid=0, jobs=1): err= 0: pid=2465578: Thu Jul 25 09:59:29 2024 00:10:44.737 read: IOPS=4091, BW=16.0MiB/s 
(16.8MB/s)(16.0MiB/1001msec) 00:10:44.737 slat (nsec): min=6199, max=54934, avg=7629.13, stdev=2039.39 00:10:44.737 clat (usec): min=65, max=196, avg=101.98, stdev=22.15 00:10:44.737 lat (usec): min=78, max=203, avg=109.61, stdev=22.33 00:10:44.737 clat percentiles (usec): 00:10:44.737 | 1.00th=[ 76], 5.00th=[ 79], 10.00th=[ 81], 20.00th=[ 83], 00:10:44.737 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 90], 60.00th=[ 108], 00:10:44.737 | 70.00th=[ 121], 80.00th=[ 126], 90.00th=[ 131], 95.00th=[ 137], 00:10:44.737 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 178], 00:10:44.737 | 99.99th=[ 196] 00:10:44.737 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(17.9MiB/1001msec); 0 zone resets 00:10:44.737 slat (nsec): min=8351, max=34419, avg=9798.85, stdev=1242.79 00:10:44.737 clat (usec): min=68, max=189, avg=104.97, stdev=24.15 00:10:44.737 lat (usec): min=76, max=198, avg=114.77, stdev=24.37 00:10:44.737 clat percentiles (usec): 00:10:44.737 | 1.00th=[ 74], 5.00th=[ 76], 10.00th=[ 78], 20.00th=[ 81], 00:10:44.737 | 30.00th=[ 84], 40.00th=[ 87], 50.00th=[ 113], 60.00th=[ 119], 00:10:44.737 | 70.00th=[ 122], 80.00th=[ 126], 90.00th=[ 133], 95.00th=[ 143], 00:10:44.737 | 99.00th=[ 163], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 186], 00:10:44.737 | 99.99th=[ 190] 00:10:44.737 bw ( KiB/s): min=20480, max=20480, per=31.94%, avg=20480.00, stdev= 0.00, samples=1 00:10:44.737 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:10:44.737 lat (usec) : 100=53.16%, 250=46.84% 00:10:44.737 cpu : usr=5.80%, sys=10.00%, ctx=8681, majf=0, minf=1 00:10:44.737 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:44.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.737 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.737 issued rwts: total=4096,4585,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.737 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:44.737 00:10:44.737 Run status group 0 (all jobs): 00:10:44.737 READ: bw=57.1MiB/s (59.9MB/s), 13.1MiB/s-16.0MiB/s (13.8MB/s-16.8MB/s), io=57.2MiB (59.9MB), run=1001-1001msec 00:10:44.737 WRITE: bw=62.6MiB/s (65.7MB/s), 14.0MiB/s-17.9MiB/s (14.7MB/s-18.8MB/s), io=62.7MiB (65.7MB), run=1001-1001msec 00:10:44.737 00:10:44.737 Disk stats (read/write): 00:10:44.737 nvme0n1: ios=3121/3260, merge=0/0, ticks=376/369, in_queue=745, util=86.87% 00:10:44.737 nvme0n2: ios=3072/3226, merge=0/0, ticks=372/375, in_queue=747, util=87.23% 00:10:44.737 nvme0n3: ios=2844/3072, merge=0/0, ticks=366/366, in_queue=732, util=89.22% 00:10:44.737 nvme0n4: ios=3584/4058, merge=0/0, ticks=332/382, in_queue=714, util=89.78% 00:10:44.738 09:59:29 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:44.738 [global] 00:10:44.738 thread=1 00:10:44.738 invalidate=1 00:10:44.738 rw=write 00:10:44.738 time_based=1 00:10:44.738 runtime=1 00:10:44.738 ioengine=libaio 00:10:44.738 direct=1 00:10:44.738 bs=4096 00:10:44.738 iodepth=128 00:10:44.738 norandommap=0 00:10:44.738 numjobs=1 00:10:44.738 00:10:44.738 verify_dump=1 00:10:44.738 verify_backlog=512 00:10:44.738 verify_state_save=0 00:10:44.738 do_verify=1 00:10:44.738 verify=crc32c-intel 00:10:44.738 [job0] 00:10:44.738 filename=/dev/nvme0n1 00:10:44.738 [job1] 00:10:44.738 filename=/dev/nvme0n2 00:10:44.738 [job2] 00:10:44.738 filename=/dev/nvme0n3 00:10:44.738 [job3] 00:10:44.738 
filename=/dev/nvme0n4 00:10:44.738 Could not set queue depth (nvme0n1) 00:10:44.738 Could not set queue depth (nvme0n2) 00:10:44.738 Could not set queue depth (nvme0n3) 00:10:44.738 Could not set queue depth (nvme0n4) 00:10:45.002 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.002 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.002 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.002 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:45.002 fio-3.35 00:10:45.002 Starting 4 threads 00:10:46.375 00:10:46.375 job0: (groupid=0, jobs=1): err= 0: pid=2466003: Thu Jul 25 09:59:31 2024 00:10:46.375 read: IOPS=4257, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1005msec) 00:10:46.375 slat (nsec): min=1447, max=1875.9k, avg=113720.95, stdev=294888.31 00:10:46.375 clat (usec): min=3894, max=17918, avg=14647.31, stdev=1002.93 00:10:46.375 lat (usec): min=4910, max=19004, avg=14761.03, stdev=988.27 00:10:46.375 clat percentiles (usec): 00:10:46.375 | 1.00th=[ 8586], 5.00th=[13698], 10.00th=[14091], 20.00th=[14484], 00:10:46.375 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14746], 60.00th=[14877], 00:10:46.375 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15270], 95.00th=[15401], 00:10:46.375 | 99.00th=[15795], 99.50th=[16057], 99.90th=[17957], 99.95th=[17957], 00:10:46.375 | 99.99th=[17957] 00:10:46.375 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:10:46.375 slat (usec): min=2, max=1773, avg=108.55, stdev=282.31 00:10:46.375 clat (usec): min=8096, max=15674, avg=14013.88, stdev=626.34 00:10:46.375 lat (usec): min=8104, max=15817, avg=14122.43, stdev=608.05 00:10:46.375 clat percentiles (usec): 00:10:46.375 | 1.00th=[12256], 5.00th=[13173], 10.00th=[13304], 20.00th=[13829], 00:10:46.375 | 30.00th=[13960], 40.00th=[13960], 50.00th=[14091], 60.00th=[14091], 00:10:46.375 | 70.00th=[14222], 80.00th=[14353], 90.00th=[14615], 95.00th=[14746], 00:10:46.375 | 99.00th=[15008], 99.50th=[15008], 99.90th=[15401], 99.95th=[15533], 00:10:46.375 | 99.99th=[15664] 00:10:46.375 bw ( KiB/s): min=18152, max=18712, per=17.02%, avg=18432.00, stdev=395.98, samples=2 00:10:46.375 iops : min= 4538, max= 4678, avg=4608.00, stdev=98.99, samples=2 00:10:46.375 lat (msec) : 4=0.01%, 10=0.91%, 20=99.08% 00:10:46.375 cpu : usr=1.99%, sys=3.78%, ctx=1315, majf=0, minf=1 00:10:46.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:46.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.375 issued rwts: total=4279,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.375 job1: (groupid=0, jobs=1): err= 0: pid=2466017: Thu Jul 25 09:59:31 2024 00:10:46.375 read: IOPS=9698, BW=37.9MiB/s (39.7MB/s)(38.0MiB/1003msec) 00:10:46.375 slat (nsec): min=1379, max=985504, avg=50947.51, stdev=186323.32 00:10:46.375 clat (usec): min=5372, max=8801, avg=6706.46, stdev=275.71 00:10:46.375 lat (usec): min=5556, max=8803, avg=6757.41, stdev=257.45 00:10:46.375 clat percentiles (usec): 00:10:46.375 | 1.00th=[ 5800], 5.00th=[ 6128], 10.00th=[ 6390], 20.00th=[ 6521], 00:10:46.375 | 30.00th=[ 6652], 40.00th=[ 6652], 50.00th=[ 6718], 60.00th=[ 6783], 00:10:46.375 | 
70.00th=[ 6849], 80.00th=[ 6915], 90.00th=[ 6980], 95.00th=[ 7046], 00:10:46.375 | 99.00th=[ 7177], 99.50th=[ 7242], 99.90th=[ 8029], 99.95th=[ 8848], 00:10:46.375 | 99.99th=[ 8848] 00:10:46.375 write: IOPS=9764, BW=38.1MiB/s (40.0MB/s)(38.3MiB/1003msec); 0 zone resets 00:10:46.375 slat (nsec): min=1893, max=1229.9k, avg=48627.64, stdev=175256.49 00:10:46.375 clat (usec): min=2096, max=7635, avg=6312.35, stdev=340.73 00:10:46.375 lat (usec): min=2099, max=7711, avg=6360.98, stdev=328.62 00:10:46.375 clat percentiles (usec): 00:10:46.375 | 1.00th=[ 5342], 5.00th=[ 5669], 10.00th=[ 6063], 20.00th=[ 6194], 00:10:46.375 | 30.00th=[ 6259], 40.00th=[ 6325], 50.00th=[ 6390], 60.00th=[ 6390], 00:10:46.375 | 70.00th=[ 6456], 80.00th=[ 6521], 90.00th=[ 6587], 95.00th=[ 6652], 00:10:46.375 | 99.00th=[ 6849], 99.50th=[ 6915], 99.90th=[ 7111], 99.95th=[ 7111], 00:10:46.375 | 99.99th=[ 7635] 00:10:46.375 bw ( KiB/s): min=37120, max=40704, per=35.94%, avg=38912.00, stdev=2534.27, samples=2 00:10:46.375 iops : min= 9280, max=10176, avg=9728.00, stdev=633.57, samples=2 00:10:46.375 lat (msec) : 4=0.17%, 10=99.83% 00:10:46.375 cpu : usr=4.29%, sys=6.19%, ctx=1258, majf=0, minf=1 00:10:46.375 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:46.375 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.375 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.375 issued rwts: total=9728,9794,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.375 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.375 job2: (groupid=0, jobs=1): err= 0: pid=2466034: Thu Jul 25 09:59:31 2024 00:10:46.375 read: IOPS=7861, BW=30.7MiB/s (32.2MB/s)(30.8MiB/1002msec) 00:10:46.375 slat (nsec): min=1340, max=1541.4k, avg=62547.42, stdev=234648.12 00:10:46.376 clat (usec): min=614, max=9582, avg=8075.29, stdev=610.99 00:10:46.376 lat (usec): min=1535, max=9591, avg=8137.83, stdev=603.72 00:10:46.376 clat percentiles (usec): 00:10:46.376 | 1.00th=[ 5604], 5.00th=[ 7439], 10.00th=[ 7701], 20.00th=[ 7832], 00:10:46.376 | 30.00th=[ 7963], 40.00th=[ 8029], 50.00th=[ 8094], 60.00th=[ 8225], 00:10:46.376 | 70.00th=[ 8291], 80.00th=[ 8455], 90.00th=[ 8586], 95.00th=[ 8717], 00:10:46.376 | 99.00th=[ 9110], 99.50th=[ 9110], 99.90th=[ 9372], 99.95th=[ 9372], 00:10:46.376 | 99.99th=[ 9634] 00:10:46.376 write: IOPS=8175, BW=31.9MiB/s (33.5MB/s)(32.0MiB/1002msec); 0 zone resets 00:10:46.376 slat (nsec): min=1877, max=2131.9k, avg=59205.80, stdev=218764.92 00:10:46.376 clat (usec): min=5975, max=9116, avg=7711.82, stdev=374.61 00:10:46.376 lat (usec): min=5983, max=9119, avg=7771.02, stdev=367.71 00:10:46.376 clat percentiles (usec): 00:10:46.376 | 1.00th=[ 6718], 5.00th=[ 7111], 10.00th=[ 7308], 20.00th=[ 7439], 00:10:46.376 | 30.00th=[ 7504], 40.00th=[ 7635], 50.00th=[ 7701], 60.00th=[ 7832], 00:10:46.376 | 70.00th=[ 7898], 80.00th=[ 7963], 90.00th=[ 8160], 95.00th=[ 8291], 00:10:46.376 | 99.00th=[ 8717], 99.50th=[ 8848], 99.90th=[ 9110], 99.95th=[ 9110], 00:10:46.376 | 99.99th=[ 9110] 00:10:46.376 bw ( KiB/s): min=32768, max=32768, per=30.27%, avg=32768.00, stdev= 0.00, samples=2 00:10:46.376 iops : min= 8192, max= 8192, avg=8192.00, stdev= 0.00, samples=2 00:10:46.376 lat (usec) : 750=0.01% 00:10:46.376 lat (msec) : 2=0.11%, 4=0.19%, 10=99.70% 00:10:46.376 cpu : usr=3.40%, sys=5.39%, ctx=1067, majf=0, minf=1 00:10:46.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:10:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.376 issued rwts: total=7877,8192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.376 job3: (groupid=0, jobs=1): err= 0: pid=2466036: Thu Jul 25 09:59:31 2024 00:10:46.376 read: IOPS=4243, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1005msec) 00:10:46.376 slat (nsec): min=1456, max=2008.5k, avg=114802.71, stdev=299136.15 00:10:46.376 clat (usec): min=3890, max=18995, avg=14679.28, stdev=973.82 00:10:46.376 lat (usec): min=4896, max=18999, avg=14794.08, stdev=954.59 00:10:46.376 clat percentiles (usec): 00:10:46.376 | 1.00th=[10290], 5.00th=[13829], 10.00th=[14091], 20.00th=[14353], 00:10:46.376 | 30.00th=[14615], 40.00th=[14746], 50.00th=[14746], 60.00th=[14877], 00:10:46.376 | 70.00th=[15008], 80.00th=[15139], 90.00th=[15270], 95.00th=[15401], 00:10:46.376 | 99.00th=[16057], 99.50th=[17957], 99.90th=[19006], 99.95th=[19006], 00:10:46.376 | 99.99th=[19006] 00:10:46.376 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:10:46.376 slat (usec): min=2, max=1803, avg=107.90, stdev=282.02 00:10:46.376 clat (usec): min=8093, max=16362, avg=14026.14, stdev=607.51 00:10:46.376 lat (usec): min=8101, max=16366, avg=14134.05, stdev=588.78 00:10:46.376 clat percentiles (usec): 00:10:46.376 | 1.00th=[12256], 5.00th=[13173], 10.00th=[13304], 20.00th=[13829], 00:10:46.376 | 30.00th=[13960], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:10:46.376 | 70.00th=[14222], 80.00th=[14484], 90.00th=[14615], 95.00th=[14746], 00:10:46.376 | 99.00th=[15008], 99.50th=[15008], 99.90th=[15401], 99.95th=[15664], 00:10:46.376 | 99.99th=[16319] 00:10:46.376 bw ( KiB/s): min=18216, max=18648, per=17.02%, avg=18432.00, stdev=305.47, samples=2 00:10:46.376 iops : min= 4554, max= 4662, avg=4608.00, stdev=76.37, samples=2 00:10:46.376 lat (msec) : 4=0.01%, 10=0.69%, 20=99.30% 00:10:46.376 cpu : usr=2.09%, sys=3.78%, ctx=1316, majf=0, minf=1 00:10:46.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:46.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:46.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:46.376 issued rwts: total=4265,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:46.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:46.376 00:10:46.376 Run status group 0 (all jobs): 00:10:46.376 READ: bw=102MiB/s (107MB/s), 16.6MiB/s-37.9MiB/s (17.4MB/s-39.7MB/s), io=102MiB (107MB), run=1002-1005msec 00:10:46.376 WRITE: bw=106MiB/s (111MB/s), 17.9MiB/s-38.1MiB/s (18.8MB/s-40.0MB/s), io=106MiB (111MB), run=1002-1005msec 00:10:46.376 00:10:46.376 Disk stats (read/write): 00:10:46.376 nvme0n1: ios=3633/3975, merge=0/0, ticks=26152/27257, in_queue=53409, util=86.77% 00:10:46.376 nvme0n2: ios=8192/8545, merge=0/0, ticks=26748/26261, in_queue=53009, util=87.03% 00:10:46.376 nvme0n3: ios=6656/7092, merge=0/0, ticks=15902/15807, in_queue=31709, util=89.02% 00:10:46.376 nvme0n4: ios=3584/3979, merge=0/0, ticks=26130/27338, in_queue=53468, util=89.67% 00:10:46.376 09:59:31 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:46.376 [global] 00:10:46.376 thread=1 00:10:46.376 invalidate=1 00:10:46.376 rw=randwrite 00:10:46.376 time_based=1 00:10:46.376 runtime=1 00:10:46.376 
ioengine=libaio 00:10:46.376 direct=1 00:10:46.376 bs=4096 00:10:46.376 iodepth=128 00:10:46.376 norandommap=0 00:10:46.376 numjobs=1 00:10:46.376 00:10:46.376 verify_dump=1 00:10:46.376 verify_backlog=512 00:10:46.376 verify_state_save=0 00:10:46.376 do_verify=1 00:10:46.376 verify=crc32c-intel 00:10:46.376 [job0] 00:10:46.376 filename=/dev/nvme0n1 00:10:46.376 [job1] 00:10:46.376 filename=/dev/nvme0n2 00:10:46.376 [job2] 00:10:46.376 filename=/dev/nvme0n3 00:10:46.376 [job3] 00:10:46.376 filename=/dev/nvme0n4 00:10:46.376 Could not set queue depth (nvme0n1) 00:10:46.376 Could not set queue depth (nvme0n2) 00:10:46.376 Could not set queue depth (nvme0n3) 00:10:46.376 Could not set queue depth (nvme0n4) 00:10:46.376 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.376 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.376 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.376 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:46.376 fio-3.35 00:10:46.376 Starting 4 threads 00:10:47.751 00:10:47.751 job0: (groupid=0, jobs=1): err= 0: pid=2466466: Thu Jul 25 09:59:32 2024 00:10:47.751 read: IOPS=5679, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1003msec) 00:10:47.751 slat (nsec): min=1415, max=963465, avg=85287.49, stdev=217245.88 00:10:47.751 clat (usec): min=2011, max=13482, avg=10948.03, stdev=742.69 00:10:47.751 lat (usec): min=2835, max=13485, avg=11033.31, stdev=711.03 00:10:47.751 clat percentiles (usec): 00:10:47.751 | 1.00th=[ 8586], 5.00th=[10290], 10.00th=[10421], 20.00th=[10683], 00:10:47.751 | 30.00th=[10945], 40.00th=[10945], 50.00th=[11076], 60.00th=[11076], 00:10:47.751 | 70.00th=[11207], 80.00th=[11207], 90.00th=[11207], 95.00th=[11338], 00:10:47.751 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13435], 99.95th=[13435], 00:10:47.751 | 99.99th=[13435] 00:10:47.751 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:10:47.751 slat (nsec): min=1952, max=948271, avg=80600.83, stdev=204800.30 00:10:47.751 clat (usec): min=8200, max=14569, avg=10499.89, stdev=738.09 00:10:47.751 lat (usec): min=8211, max=14572, avg=10580.49, stdev=716.03 00:10:47.751 clat percentiles (usec): 00:10:47.751 | 1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10159], 00:10:47.751 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10421], 00:10:47.751 | 70.00th=[10552], 80.00th=[10552], 90.00th=[10945], 95.00th=[12780], 00:10:47.751 | 99.00th=[13042], 99.50th=[13042], 99.90th=[14484], 99.95th=[14615], 00:10:47.751 | 99.99th=[14615] 00:10:47.751 bw ( KiB/s): min=24080, max=24576, per=27.08%, avg=24328.00, stdev=350.72, samples=2 00:10:47.751 iops : min= 6020, max= 6144, avg=6082.00, stdev=87.68, samples=2 00:10:47.751 lat (msec) : 4=0.12%, 10=7.59%, 20=92.29% 00:10:47.751 cpu : usr=2.69%, sys=5.19%, ctx=1839, majf=0, minf=1 00:10:47.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:47.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.751 issued rwts: total=5697,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.751 job1: (groupid=0, jobs=1): err= 0: pid=2466479: Thu Jul 25 09:59:32 
2024 00:10:47.751 read: IOPS=5679, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1003msec) 00:10:47.751 slat (nsec): min=1515, max=1367.8k, avg=85321.91, stdev=217878.03 00:10:47.751 clat (usec): min=2004, max=13483, avg=10943.12, stdev=788.83 00:10:47.751 lat (usec): min=2802, max=13486, avg=11028.45, stdev=760.07 00:10:47.751 clat percentiles (usec): 00:10:47.751 | 1.00th=[ 7832], 5.00th=[10290], 10.00th=[10421], 20.00th=[10683], 00:10:47.751 | 30.00th=[10945], 40.00th=[10945], 50.00th=[11076], 60.00th=[11076], 00:10:47.751 | 70.00th=[11207], 80.00th=[11207], 90.00th=[11207], 95.00th=[11338], 00:10:47.751 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13435], 99.95th=[13435], 00:10:47.751 | 99.99th=[13435] 00:10:47.751 write: IOPS=6125, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1003msec); 0 zone resets 00:10:47.751 slat (nsec): min=1927, max=1107.5k, avg=80582.61, stdev=204698.26 00:10:47.751 clat (usec): min=8193, max=13815, avg=10498.04, stdev=725.30 00:10:47.751 lat (usec): min=8228, max=14562, avg=10578.62, stdev=703.10 00:10:47.751 clat percentiles (usec): 00:10:47.751 | 1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[ 9896], 20.00th=[10028], 00:10:47.751 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10421], 00:10:47.751 | 70.00th=[10552], 80.00th=[10552], 90.00th=[10945], 95.00th=[12780], 00:10:47.751 | 99.00th=[13042], 99.50th=[13042], 99.90th=[13829], 99.95th=[13829], 00:10:47.751 | 99.99th=[13829] 00:10:47.751 bw ( KiB/s): min=24080, max=24576, per=27.08%, avg=24328.00, stdev=350.72, samples=2 00:10:47.751 iops : min= 6020, max= 6144, avg=6082.00, stdev=87.68, samples=2 00:10:47.751 lat (msec) : 4=0.19%, 10=7.60%, 20=92.21% 00:10:47.751 cpu : usr=3.29%, sys=4.69%, ctx=1840, majf=0, minf=1 00:10:47.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:10:47.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.751 issued rwts: total=5697,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.751 job2: (groupid=0, jobs=1): err= 0: pid=2466495: Thu Jul 25 09:59:32 2024 00:10:47.751 read: IOPS=4901, BW=19.1MiB/s (20.1MB/s)(19.2MiB/1002msec) 00:10:47.751 slat (nsec): min=1488, max=1677.9k, avg=101316.61, stdev=264283.56 00:10:47.751 clat (usec): min=1409, max=14918, avg=12921.84, stdev=1841.51 00:10:47.751 lat (usec): min=2335, max=14941, avg=13023.16, stdev=1848.66 00:10:47.751 clat percentiles (usec): 00:10:47.751 | 1.00th=[ 4359], 5.00th=[ 8291], 10.00th=[11338], 20.00th=[12911], 00:10:47.751 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13566], 60.00th=[13566], 00:10:47.751 | 70.00th=[13698], 80.00th=[13829], 90.00th=[13829], 95.00th=[14091], 00:10:47.751 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14746], 99.95th=[14877], 00:10:47.751 | 99.99th=[14877] 00:10:47.751 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:47.751 slat (nsec): min=1981, max=1495.7k, avg=94892.20, stdev=248889.46 00:10:47.751 clat (usec): min=6940, max=14589, avg=12363.32, stdev=1488.50 00:10:47.751 lat (usec): min=7851, max=14596, avg=12458.21, stdev=1497.29 00:10:47.751 clat percentiles (usec): 00:10:47.751 | 1.00th=[ 7898], 5.00th=[ 8029], 10.00th=[ 8848], 20.00th=[12256], 00:10:47.751 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12780], 60.00th=[12911], 00:10:47.751 | 70.00th=[13042], 80.00th=[13042], 90.00th=[13304], 95.00th=[13566], 00:10:47.751 | 99.00th=[13829], 99.50th=[13960], 
99.90th=[14222], 99.95th=[14222], 00:10:47.751 | 99.99th=[14615] 00:10:47.751 bw ( KiB/s): min=20480, max=20480, per=22.80%, avg=20480.00, stdev= 0.00, samples=2 00:10:47.751 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:47.751 lat (msec) : 2=0.01%, 4=0.35%, 10=9.50%, 20=90.14% 00:10:47.751 cpu : usr=2.50%, sys=4.60%, ctx=1521, majf=0, minf=1 00:10:47.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:47.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.751 issued rwts: total=4911,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.751 job3: (groupid=0, jobs=1): err= 0: pid=2466500: Thu Jul 25 09:59:32 2024 00:10:47.751 read: IOPS=5004, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1002msec) 00:10:47.751 slat (nsec): min=1480, max=1274.7k, avg=99555.29, stdev=260529.59 00:10:47.751 clat (usec): min=1449, max=14910, avg=12728.73, stdev=2067.18 00:10:47.751 lat (usec): min=1835, max=14927, avg=12828.28, stdev=2078.28 00:10:47.751 clat percentiles (usec): 00:10:47.752 | 1.00th=[ 4424], 5.00th=[ 7635], 10.00th=[ 9372], 20.00th=[12780], 00:10:47.752 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13566], 00:10:47.752 | 70.00th=[13698], 80.00th=[13698], 90.00th=[13829], 95.00th=[13960], 00:10:47.752 | 99.00th=[14484], 99.50th=[14615], 99.90th=[14877], 99.95th=[14877], 00:10:47.752 | 99.99th=[14877] 00:10:47.752 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:10:47.752 slat (nsec): min=1963, max=1681.0k, avg=94547.22, stdev=247792.64 00:10:47.752 clat (usec): min=6829, max=14261, avg=12284.28, stdev=1572.61 00:10:47.752 lat (usec): min=6832, max=14369, avg=12378.83, stdev=1582.89 00:10:47.752 clat percentiles (usec): 00:10:47.752 | 1.00th=[ 6980], 5.00th=[ 7635], 10.00th=[ 9241], 20.00th=[12125], 00:10:47.752 | 30.00th=[12518], 40.00th=[12780], 50.00th=[12780], 60.00th=[12911], 00:10:47.752 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13435], 00:10:47.752 | 99.00th=[13829], 99.50th=[13829], 99.90th=[14091], 99.95th=[14222], 00:10:47.752 | 99.99th=[14222] 00:10:47.752 bw ( KiB/s): min=20480, max=20480, per=22.80%, avg=20480.00, stdev= 0.00, samples=2 00:10:47.752 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:10:47.752 lat (msec) : 2=0.27%, 4=0.12%, 10=10.78%, 20=88.83% 00:10:47.752 cpu : usr=2.60%, sys=4.30%, ctx=1529, majf=0, minf=1 00:10:47.752 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:47.752 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.752 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:47.752 issued rwts: total=5015,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.752 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:47.752 00:10:47.752 Run status group 0 (all jobs): 00:10:47.752 READ: bw=83.0MiB/s (87.1MB/s), 19.1MiB/s-22.2MiB/s (20.1MB/s-23.3MB/s), io=83.3MiB (87.3MB), run=1002-1003msec 00:10:47.752 WRITE: bw=87.7MiB/s (92.0MB/s), 20.0MiB/s-23.9MiB/s (20.9MB/s-25.1MB/s), io=88.0MiB (92.3MB), run=1002-1003msec 00:10:47.752 00:10:47.752 Disk stats (read/write): 00:10:47.752 nvme0n1: ios=5170/5176, merge=0/0, ticks=14061/13120, in_queue=27181, util=87.17% 00:10:47.752 nvme0n2: ios=5120/5170, merge=0/0, ticks=14037/13147, in_queue=27184, util=87.34% 00:10:47.752 
nvme0n3: ios=4096/4213, merge=0/0, ticks=18116/17576, in_queue=35692, util=89.25% 00:10:47.752 nvme0n4: ios=4096/4235, merge=0/0, ticks=18094/17613, in_queue=35707, util=89.70% 00:10:47.752 09:59:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:47.752 09:59:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2466594 00:10:47.752 09:59:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:47.752 09:59:32 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:47.752 [global] 00:10:47.752 thread=1 00:10:47.752 invalidate=1 00:10:47.752 rw=read 00:10:47.752 time_based=1 00:10:47.752 runtime=10 00:10:47.752 ioengine=libaio 00:10:47.752 direct=1 00:10:47.752 bs=4096 00:10:47.752 iodepth=1 00:10:47.752 norandommap=1 00:10:47.752 numjobs=1 00:10:47.752 00:10:47.752 [job0] 00:10:47.752 filename=/dev/nvme0n1 00:10:47.752 [job1] 00:10:47.752 filename=/dev/nvme0n2 00:10:47.752 [job2] 00:10:47.752 filename=/dev/nvme0n3 00:10:47.752 [job3] 00:10:47.752 filename=/dev/nvme0n4 00:10:47.752 Could not set queue depth (nvme0n1) 00:10:47.752 Could not set queue depth (nvme0n2) 00:10:47.752 Could not set queue depth (nvme0n3) 00:10:47.752 Could not set queue depth (nvme0n4) 00:10:48.010 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.010 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.010 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.010 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.010 fio-3.35 00:10:48.010 Starting 4 threads 00:10:51.294 09:59:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:51.294 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=72507392, buflen=4096 00:10:51.294 fio: pid=2466882, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:51.294 09:59:35 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:51.294 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=108732416, buflen=4096 00:10:51.295 fio: pid=2466881, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:51.295 09:59:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.295 09:59:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:51.295 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=17768448, buflen=4096 00:10:51.295 fio: pid=2466879, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:51.295 09:59:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.295 09:59:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:51.553 fio: 
io_u error on file /dev/nvme0n2: Remote I/O error: read offset=37216256, buflen=4096 00:10:51.553 fio: pid=2466880, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:51.553 09:59:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.553 09:59:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:51.553 00:10:51.553 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2466879: Thu Jul 25 09:59:36 2024 00:10:51.553 read: IOPS=6758, BW=26.4MiB/s (27.7MB/s)(80.9MiB/3066msec) 00:10:51.553 slat (usec): min=4, max=20822, avg=10.12, stdev=196.50 00:10:51.553 clat (usec): min=48, max=20634, avg=135.46, stdev=203.38 00:10:51.553 lat (usec): min=54, max=20930, avg=145.57, stdev=282.67 00:10:51.553 clat percentiles (usec): 00:10:51.553 | 1.00th=[ 57], 5.00th=[ 76], 10.00th=[ 83], 20.00th=[ 117], 00:10:51.554 | 30.00th=[ 124], 40.00th=[ 128], 50.00th=[ 133], 60.00th=[ 145], 00:10:51.554 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 165], 95.00th=[ 176], 00:10:51.554 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 221], 99.95th=[ 223], 00:10:51.554 | 99.99th=[ 249] 00:10:51.554 bw ( KiB/s): min=24112, max=29192, per=23.84%, avg=26472.00, stdev=2131.00, samples=5 00:10:51.554 iops : min= 6028, max= 7298, avg=6618.00, stdev=532.75, samples=5 00:10:51.554 lat (usec) : 50=0.04%, 100=15.09%, 250=84.85% 00:10:51.554 lat (msec) : 50=0.01% 00:10:51.554 cpu : usr=2.48%, sys=7.08%, ctx=20728, majf=0, minf=1 00:10:51.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.554 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.554 issued rwts: total=20723,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.554 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2466880: Thu Jul 25 09:59:36 2024 00:10:51.554 read: IOPS=7817, BW=30.5MiB/s (32.0MB/s)(99.5MiB/3258msec) 00:10:51.554 slat (usec): min=3, max=15829, avg= 9.72, stdev=170.29 00:10:51.554 clat (usec): min=40, max=331, avg=116.66, stdev=40.33 00:10:51.554 lat (usec): min=55, max=15899, avg=126.39, stdev=174.52 00:10:51.554 clat percentiles (usec): 00:10:51.554 | 1.00th=[ 53], 5.00th=[ 56], 10.00th=[ 59], 20.00th=[ 74], 00:10:51.554 | 30.00th=[ 80], 40.00th=[ 119], 50.00th=[ 126], 60.00th=[ 133], 00:10:51.554 | 70.00th=[ 147], 80.00th=[ 155], 90.00th=[ 163], 95.00th=[ 169], 00:10:51.554 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 219], 99.95th=[ 221], 00:10:51.554 | 99.99th=[ 243] 00:10:51.554 bw ( KiB/s): min=24120, max=41254, per=26.75%, avg=29701.00, stdev=6605.86, samples=6 00:10:51.554 iops : min= 6030, max=10313, avg=7425.17, stdev=1651.29, samples=6 00:10:51.554 lat (usec) : 50=0.04%, 100=37.42%, 250=62.53%, 500=0.01% 00:10:51.554 cpu : usr=2.52%, sys=8.54%, ctx=25477, majf=0, minf=1 00:10:51.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.554 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.554 issued rwts: total=25471,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.554 latency 
: target=0, window=0, percentile=100.00%, depth=1 00:10:51.554 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2466881: Thu Jul 25 09:59:36 2024 00:10:51.554 read: IOPS=9201, BW=35.9MiB/s (37.7MB/s)(104MiB/2885msec) 00:10:51.554 slat (usec): min=4, max=7895, avg= 7.68, stdev=67.74 00:10:51.554 clat (usec): min=64, max=20439, avg=99.15, stdev=126.79 00:10:51.554 lat (usec): min=75, max=20447, avg=106.83, stdev=143.79 00:10:51.554 clat percentiles (usec): 00:10:51.554 | 1.00th=[ 76], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82], 00:10:51.554 | 30.00th=[ 84], 40.00th=[ 86], 50.00th=[ 88], 60.00th=[ 91], 00:10:51.554 | 70.00th=[ 116], 80.00th=[ 125], 90.00th=[ 130], 95.00th=[ 135], 00:10:51.554 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 190], 99.95th=[ 206], 00:10:51.554 | 99.99th=[ 239] 00:10:51.554 bw ( KiB/s): min=32360, max=43536, per=34.46%, avg=38265.60, stdev=5175.86, samples=5 00:10:51.554 iops : min= 8090, max=10884, avg=9566.40, stdev=1293.96, samples=5 00:10:51.554 lat (usec) : 100=67.84%, 250=32.15%, 500=0.01% 00:10:51.554 lat (msec) : 50=0.01% 00:10:51.554 cpu : usr=2.67%, sys=10.75%, ctx=26549, majf=0, minf=1 00:10:51.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.554 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.554 issued rwts: total=26547,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.554 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2466882: Thu Jul 25 09:59:36 2024 00:10:51.554 read: IOPS=6566, BW=25.6MiB/s (26.9MB/s)(69.1MiB/2696msec) 00:10:51.554 slat (nsec): min=5313, max=48037, avg=7906.60, stdev=2341.40 00:10:51.554 clat (usec): min=67, max=264, avg=141.76, stdev=23.00 00:10:51.554 lat (usec): min=78, max=270, avg=149.66, stdev=23.15 00:10:51.554 clat percentiles (usec): 00:10:51.554 | 1.00th=[ 89], 5.00th=[ 104], 10.00th=[ 120], 20.00th=[ 125], 00:10:51.554 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 139], 60.00th=[ 149], 00:10:51.554 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 186], 00:10:51.554 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 217], 99.95th=[ 227], 00:10:51.554 | 99.99th=[ 243] 00:10:51.554 bw ( KiB/s): min=24272, max=29288, per=23.80%, avg=26432.00, stdev=2164.65, samples=5 00:10:51.554 iops : min= 6068, max= 7322, avg=6608.00, stdev=541.16, samples=5 00:10:51.554 lat (usec) : 100=4.39%, 250=95.59%, 500=0.01% 00:10:51.554 cpu : usr=2.41%, sys=7.38%, ctx=17703, majf=0, minf=2 00:10:51.554 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:51.554 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.554 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.554 issued rwts: total=17703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.554 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:51.554 00:10:51.554 Run status group 0 (all jobs): 00:10:51.554 READ: bw=108MiB/s (114MB/s), 25.6MiB/s-35.9MiB/s (26.9MB/s-37.7MB/s), io=353MiB (370MB), run=2696-3258msec 00:10:51.554 00:10:51.554 Disk stats (read/write): 00:10:51.554 nvme0n1: ios=18968/0, merge=0/0, ticks=2531/0, in_queue=2531, util=94.59% 00:10:51.554 nvme0n2: ios=23400/0, merge=0/0, ticks=2688/0, in_queue=2688, util=94.47% 00:10:51.554 nvme0n3: ios=26542/0, merge=0/0, 
ticks=2459/0, in_queue=2459, util=96.12% 00:10:51.554 nvme0n4: ios=17317/0, merge=0/0, ticks=2317/0, in_queue=2317, util=96.46% 00:10:51.554 09:59:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.554 09:59:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:51.812 09:59:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:51.812 09:59:36 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:52.071 09:59:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.071 09:59:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:52.329 09:59:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:52.329 09:59:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:52.588 09:59:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:52.588 09:59:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2466594 00:10:52.588 09:59:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:52.588 09:59:37 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:53.523 nvmf hotplug test: fio failed as expected 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:53.523 09:59:38 
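The fio summary above ends with err=121 (Remote I/O error) on every job: this is the hotplug test, which deletes the malloc bdevs backing the namespaces while fio is still running, so the remaining I/O is expected to fail ("nvmf hotplug test: fio failed as expected"). The teardown the xtrace lines record boils down to the loop below; a minimal sketch of what was traced, nothing more.

    # Sketch of the hotplug teardown logged above:
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for malloc_bdev in Malloc3 Malloc4 Malloc5 Malloc6; do
        "$rpc" bdev_malloc_delete "$malloc_bdev"    # yank the backing device mid-I/O
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # then drop the initiator side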
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:10:53.523 rmmod nvme_rdma 00:10:53.523 rmmod nvme_fabrics 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2463806 ']' 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2463806 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2463806 ']' 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2463806 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.523 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2463806 00:10:53.782 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:53.782 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:53.782 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2463806' 00:10:53.782 killing process with pid 2463806 00:10:53.782 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2463806 00:10:53.782 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 2463806 00:10:54.041 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:54.041 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:10:54.041 00:10:54.041 real 0m25.340s 00:10:54.041 user 1m51.218s 00:10:54.041 sys 0m8.448s 00:10:54.041 09:59:38 nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:54.041 09:59:38 
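The nvmftestfini sequence traced above unloads the fabrics modules under set +e (module removal may legitimately fail while references drain) and then kills the target process, probing liveness with kill -0 first. A minimal sketch with the retry loop simplified; the real helper in nvmf/common.sh does more error handling:

    sync
    set +e                          # rmmod can fail while queues drain
    for i in {1..20}; do
        modprobe -v -r nvme-rdma && modprobe -v -r nvme-fabrics && break
    done
    set -e
    kill -0 "$nvmfpid" && kill "$nvmfpid"   # pid 2463806 in this run
    wait "$nvmfpid"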
nvmf_rdma.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.041 ************************************ 00:10:54.041 END TEST nvmf_fio_target 00:10:54.041 ************************************ 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:54.041 ************************************ 00:10:54.041 START TEST nvmf_bdevio 00:10:54.041 ************************************ 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:10:54.041 * Looking for test storage... 00:10:54.041 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:54.041 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:10:54.042 09:59:39 
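bdevio.sh starts by sourcing nvmf/common.sh, which derives a per-run host identity with nvme-cli, as the NVME_HOSTNQN/NVME_HOSTID assignments above show. A sketch of that step; the parameter expansion used to extract the bare UUID is an assumption (the exact extraction isn't echoed in this trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # hypothetical: strip down to the bare uuid
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")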
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio 
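paths/export.sh is re-sourced once per test suite, and each pass prepends the same golangci/protoc/go directories again, which is why the PATH values above have grown so long. Harmless, but a hypothetical one-liner (not part of the test scripts) would collapse the duplicates:

    # Hypothetical cleanup, not in paths/export.sh:
    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
    PATH=${PATH%:}    # drop the trailing colon left by ORS
    export PATH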
-- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:10:54.042 09:59:39 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:00.652 
09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:00.652 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:00.652 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:00.652 Found net devices under 0000:da:00.0: mlx_0_0 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:00.652 Found net devices under 0000:da:00.1: mlx_0_1 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@420 -- 
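Device ID 0x15b3:0x1015 is a Mellanox ConnectX-4 Lx; the script maps each PCI function to its netdev by globbing /sys/bus/pci/devices/$pci/net, which is where the mlx_0_0 and mlx_0_1 names above come from. An equivalent manual check (the lspci invocation is an illustration, not from this trace):

    lspci -nn -d 15b3:1015                   # confirm the ConnectX-4 Lx functions
    for pci in 0000:da:00.0 0000:da:00.1; do
        ls "/sys/bus/pci/devices/$pci/net"   # -> mlx_0_0 / mlx_0_1
    done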
# rdma_device_init 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # uname 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:00.652 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:00.653 09:59:44 
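rdma_device_init loads the whole IB/RDMA module stack before any interface work begins; the sequence below is exactly the modprobe order traced above:

    # Module load order from load_ib_rdma_modules:
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done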
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:00.653 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:00.653 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:00.653 altname enp218s0f0np0 00:11:00.653 altname ens818f0np0 00:11:00.653 inet 192.168.100.8/24 scope global mlx_0_0 00:11:00.653 valid_lft forever preferred_lft forever 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:00.653 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:00.653 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:00.653 altname enp218s0f1np1 00:11:00.653 altname ens818f1np1 00:11:00.653 inet 192.168.100.9/24 scope global mlx_0_1 00:11:00.653 valid_lft forever preferred_lft forever 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # 
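The address lookup traced above is a three-stage pipeline; reconstructed here as a function (the body matches the logged commands, the name comes from the trace):

    get_ip_address() {
        local interface=$1
        # "ip -o -4" prints one line per address; field 4 is the CIDR, e.g. 192.168.100.8/24
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0    # -> 192.168.100.8
    get_ip_address mlx_0_1    # -> 192.168.100.9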
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # continue 2 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:00.653 192.168.100.9' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:00.653 192.168.100.9' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # head -n 1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:00.653 192.168.100.9' 00:11:00.653 09:59:44 
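With both ports resolved, the two target IPs are peeled off a newline-separated list with the head/tail pipelines the trace records:

    RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9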
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # tail -n +2 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # head -n 1 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2470903 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2470903 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2470903 ']' 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.653 09:59:44 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.653 [2024-07-25 09:59:44.927020] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:00.653 [2024-07-25 09:59:44.927068] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.653 EAL: No free 2048 kB hugepages reported on node 1 00:11:00.653 [2024-07-25 09:59:44.994454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:00.653 [2024-07-25 09:59:45.067520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:00.653 [2024-07-25 09:59:45.067563] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
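nvmfappstart launches the target with the command line recorded above; the -m 0x78 core mask is binary 1111000, which is why the reactor notices that follow land on cores 3 through 6. A sketch, with waitforlisten reduced to a plain RPC-socket poll (hedged; the real helper also handles timeouts):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x78 &    # 0x78 = cores 3,4,5,6; 0xFFFF = all tracepoint groups
    nvmfpid=$!
    # waitforlisten sketch: poll until the app answers on its RPC socket
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done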
00:11:00.653 [2024-07-25 09:59:45.067570] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:00.653 [2024-07-25 09:59:45.067576] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:00.653 [2024-07-25 09:59:45.067581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:00.654 [2024-07-25 09:59:45.067700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:00.654 [2024-07-25 09:59:45.067785] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:00.654 [2024-07-25 09:59:45.067871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:00.654 [2024-07-25 09:59:45.067872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:00.654 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:00.654 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:00.654 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:00.654 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:00.654 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.654 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:00.654 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:00.654 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.654 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.654 [2024-07-25 09:59:45.799044] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16413d0/0x16458c0) succeed. 00:11:00.654 [2024-07-25 09:59:45.808204] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16429c0/0x1686f50) succeed. 
00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.913 Malloc0 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.913 [2024-07-25 09:59:45.971575] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:00.913 { 00:11:00.913 "params": { 00:11:00.913 "name": "Nvme$subsystem", 00:11:00.913 "trtype": "$TEST_TRANSPORT", 00:11:00.913 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:00.913 "adrfam": "ipv4", 00:11:00.913 "trsvcid": "$NVMF_PORT", 00:11:00.913 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:00.913 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:00.913 "hdgst": ${hdgst:-false}, 00:11:00.913 "ddgst": ${ddgst:-false} 00:11:00.913 }, 00:11:00.913 "method": "bdev_nvme_attach_controller" 00:11:00.913 } 00:11:00.913 EOF 00:11:00.913 )") 00:11:00.913 09:59:45 
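The rpc_cmd calls traced above provision the whole target in a handful of steps; the same sequence as standalone rpc.py invocations (rpc_cmd is just a socket wrapper around rpc.py):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    "$rpc" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0                  # 64 MiB bdev, 512 B blocks
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420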
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:00.913 09:59:45 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:00.913 "params": { 00:11:00.913 "name": "Nvme1", 00:11:00.913 "trtype": "rdma", 00:11:00.913 "traddr": "192.168.100.8", 00:11:00.913 "adrfam": "ipv4", 00:11:00.913 "trsvcid": "4420", 00:11:00.913 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:00.913 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:00.913 "hdgst": false, 00:11:00.913 "ddgst": false 00:11:00.913 }, 00:11:00.913 "method": "bdev_nvme_attach_controller" 00:11:00.913 }' 00:11:00.913 [2024-07-25 09:59:46.018653] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:00.913 [2024-07-25 09:59:46.018694] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471155 ] 00:11:00.913 EAL: No free 2048 kB hugepages reported on node 1 00:11:01.172 [2024-07-25 09:59:46.085413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:01.172 [2024-07-25 09:59:46.161112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:01.172 [2024-07-25 09:59:46.161220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.172 [2024-07-25 09:59:46.161220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:01.431 I/O targets: 00:11:01.431 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:01.431 00:11:01.431 00:11:01.431 CUnit - A unit testing framework for C - Version 2.1-3 00:11:01.431 http://cunit.sourceforge.net/ 00:11:01.431 00:11:01.431 00:11:01.431 Suite: bdevio tests on: Nvme1n1 00:11:01.431 Test: blockdev write read block ...passed 00:11:01.431 Test: blockdev write zeroes read block ...passed 00:11:01.431 Test: blockdev write zeroes read no split ...passed 00:11:01.431 Test: blockdev write zeroes read split ...passed 00:11:01.431 Test: blockdev write zeroes read split partial ...passed 00:11:01.431 Test: blockdev reset ...[2024-07-25 09:59:46.367010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:01.431 [2024-07-25 09:59:46.389875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:11:01.431 [2024-07-25 09:59:46.416321] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
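bdevio receives the generated bdev_nvme_attach_controller config on /dev/fd/62 rather than a temp file; the mechanism is plain process substitution (hedged sketch; the fd number is chosen by bash at runtime). The "blockdev reset" test that follows then deliberately disconnects the controller, hence the expected "CQ transport error -6" before the reset completes successfully.

    # Hedged sketch of the invocation behind "--json /dev/fd/62":
    rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$rootdir/test/bdev/bdevio/bdevio" --json <(gen_nvmf_target_json)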
00:11:01.431 passed 00:11:01.431 Test: blockdev write read 8 blocks ...passed 00:11:01.431 Test: blockdev write read size > 128k ...passed 00:11:01.431 Test: blockdev write read invalid size ...passed 00:11:01.431 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:01.431 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:01.431 Test: blockdev write read max offset ...passed 00:11:01.431 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:01.431 Test: blockdev writev readv 8 blocks ...passed 00:11:01.431 Test: blockdev writev readv 30 x 1block ...passed 00:11:01.431 Test: blockdev writev readv block ...passed 00:11:01.432 Test: blockdev writev readv size > 128k ...passed 00:11:01.432 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:01.432 Test: blockdev comparev and writev ...[2024-07-25 09:59:46.419613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.432 [2024-07-25 09:59:46.419647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:01.432 [2024-07-25 09:59:46.419657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.432 [2024-07-25 09:59:46.419665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:01.432 [2024-07-25 09:59:46.419829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.432 [2024-07-25 09:59:46.419838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:01.432 [2024-07-25 09:59:46.419847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.432 [2024-07-25 09:59:46.419855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:01.432 [2024-07-25 09:59:46.420024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.432 [2024-07-25 09:59:46.420033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:01.432 [2024-07-25 09:59:46.420044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.432 [2024-07-25 09:59:46.420051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:01.432 [2024-07-25 09:59:46.420249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.432 [2024-07-25 09:59:46.420259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:01.432 [2024-07-25 09:59:46.420266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:01.432 [2024-07-25 09:59:46.420273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:01.432 passed 00:11:01.432 Test: blockdev nvme passthru rw ...passed 00:11:01.432 Test: blockdev nvme passthru vendor specific ...[2024-07-25 09:59:46.420570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:01.432 [2024-07-25 09:59:46.420581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:01.432 [2024-07-25 09:59:46.420622] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:01.432 [2024-07-25 09:59:46.420631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:01.432 [2024-07-25 09:59:46.420675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:01.432 [2024-07-25 09:59:46.420683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:01.432 [2024-07-25 09:59:46.420721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:11:01.432 [2024-07-25 09:59:46.420729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:01.432 passed 00:11:01.432 Test: blockdev nvme admin passthru ...passed 00:11:01.432 Test: blockdev copy ...passed 00:11:01.432 00:11:01.432 Run Summary: Type Total Ran Passed Failed Inactive 00:11:01.432 suites 1 1 n/a 0 0 00:11:01.432 tests 23 23 23 0 0 00:11:01.432 asserts 152 152 152 0 n/a 00:11:01.432 00:11:01.432 Elapsed time = 0.173 seconds 00:11:01.691 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:01.691 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.691 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.691 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.691 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:01.691 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:01.691 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:01.692 rmmod nvme_rdma 00:11:01.692 rmmod nvme_fabrics 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:01.692 09:59:46 
nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2470903 ']' 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2470903 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 2470903 ']' 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2470903 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2470903 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2470903' 00:11:01.692 killing process with pid 2470903 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2470903 00:11:01.692 09:59:46 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2470903 00:11:01.950 09:59:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:01.950 09:59:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:01.950 00:11:01.950 real 0m7.966s 00:11:01.950 user 0m10.471s 00:11:01.950 sys 0m4.830s 00:11:01.950 09:59:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.950 09:59:47 nvmf_rdma.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:01.950 ************************************ 00:11:01.950 END TEST nvmf_bdevio 00:11:01.950 ************************************ 00:11:01.950 09:59:47 nvmf_rdma.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:01.950 00:11:01.950 real 3m58.479s 00:11:01.950 user 10m39.691s 00:11:01.950 sys 1m19.573s 00:11:01.950 09:59:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:01.950 09:59:47 nvmf_rdma.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:01.950 ************************************ 00:11:01.950 END TEST nvmf_target_core 00:11:01.950 ************************************ 00:11:01.950 09:59:47 nvmf_rdma -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:01.950 09:59:47 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:01.950 09:59:47 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:01.950 09:59:47 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:11:02.210 ************************************ 00:11:02.210 START TEST nvmf_target_extra 00:11:02.210 ************************************ 00:11:02.210 09:59:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1125 -- # 
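Every START TEST / END TEST banner plus the real/user/sys triple above comes from the run_test wrapper in autotest_common.sh; a hedged sketch of its shape (the real helper also manages xtrace state and return codes):

    # Hedged sketch; the real run_test does more bookkeeping:
    run_test() {
        local test_name=$1; shift
        echo "************ START TEST $test_name ************"
        time "$@"    # emits the real/user/sys lines seen above
        echo "************ END TEST $test_name ************"
    }
    run_test nvmf_bdevio \
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh \
        --transport=rdma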
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=rdma 00:11:02.210 * Looking for test storage... 00:11:02.210 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:11:02.210 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.211 ************************************ 00:11:02.211 START TEST nvmf_example 00:11:02.211 ************************************ 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:11:02.211 * Looking for test storage... 00:11:02.211 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:02.211 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:02.472 09:59:47 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:02.472 09:59:47 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.745 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:07.746 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:07.746 09:59:52 
nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:07.746 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:07.746 Found net devices under 0000:da:00.0: mlx_0_0 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:da:00.1: mlx_0_1' 00:11:07.746 Found net devices under 0000:da:00.1: mlx_0_1 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@420 -- # rdma_device_init 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # uname 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]]
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}"
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]]
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}'
00:11:07.746 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.8
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]]
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_0
00:11:08.006 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:08.006 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff
00:11:08.006 altname enp218s0f0np0
00:11:08.006 altname ens818f0np0
00:11:08.006 inet 192.168.100.8/24 scope global mlx_0_0
00:11:08.006 valid_lft forever preferred_lft forever
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list)
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}'
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@74 -- # ip=192.168.100.9
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]]
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@81 -- # ip addr show mlx_0_1
00:11:08.006 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000
00:11:08.006 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff
00:11:08.006 altname enp218s0f1np1
00:11:08.006 altname ens818f1np1
00:11:08.006 inet 192.168.100.9/24 scope global mlx_0_1
00:11:08.006 valid_lft forever preferred_lft forever
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
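The nvmftestinit trace above boils down to two reusable steps: load the kernel RDMA stack (the modprobe run earlier in this trace, nvmf/common.sh@62-68), then resolve an IPv4 address for each RDMA-capable netdev. A minimal standalone sketch of both steps, assuming root and the same mlx_0_0/mlx_0_1 interface names as on this rig:

    #!/usr/bin/env bash
    # Load the in-tree RDMA stack, as rdma_device_init does in nvmf/common.sh.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done

    # get_ip_address, as traced above: with `ip -o -4` each address prints on
    # one line and field 4 is ADDR/PREFIX, so strip the prefix length.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ip_address mlx_0_0   # 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # 192.168.100.9 on this rig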
00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # continue 2 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:08.006 192.168.100.9' 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:08.006 192.168.100.9' 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # head -n 1 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:08.006 192.168.100.9' 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # tail -n +2 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # head -n 1 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:08.006 09:59:52 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2474498 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2474498 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2474498 ']' 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:11:08.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.006 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.006 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.941 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:08.941 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:08.941 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:08.941 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:08.941 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.941 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:08.941 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.941 09:59:53 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.199 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.200 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:09.200 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 
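The rpc_cmd sequence traced above is the entire bring-up of the example target: one RDMA transport, one 64 MiB malloc bdev with 512-byte blocks, one subsystem, the bdev attached as a namespace, and a listener on the first RDMA IP. rpc_cmd is autotest's wrapper around scripts/rpc.py, so the same setup can be reproduced against any running SPDK target; a hypothetical standalone reproduction using the exact arguments from the trace:

    # Sketch: replay the traced bring-up RPCs against the app's RPC socket
    # (/var/tmp/spdk.sock, as waitforlisten reports above; rpc.py's default).
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC bdev_malloc_create 64 512                 # creates Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma \
        -a 192.168.100.8 -s 4420

The perf run that follows exercises exactly this listener, passing -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' to spdk_nvme_perf.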
00:11:09.200 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:09.200 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:09.200 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:09.200 09:59:54 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:09.200 EAL: No free 2048 kB hugepages reported on node 1
00:11:21.407 Initializing NVMe Controllers
00:11:21.407 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:11:21.407 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:21.407 Initialization complete. Launching workers.
00:11:21.407 ========================================================
00:11:21.407                                                                                  Latency(us)
00:11:21.407 Device Information                                                             :     IOPS    MiB/s  Average      min      max
00:11:21.407 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26122.72   102.04  2449.83   635.29 15824.30
00:11:21.407 ========================================================
00:11:21.407 Total                                                                          : 26122.72   102.04  2449.83   635.29 15824.30
00:11:21.407
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:11:21.407 rmmod nvme_rdma
00:11:21.407 rmmod nvme_fabrics
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2474498 ']'
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2474498
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2474498 ']'
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2474498
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:21.407 10:00:05
nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2474498 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2474498' 00:11:21.407 killing process with pid 2474498 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2474498 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2474498 00:11:21.407 nvmf threads initialize successfully 00:11:21.407 bdev subsystem init successfully 00:11:21.407 created a nvmf target service 00:11:21.407 create targets's poll groups done 00:11:21.407 all subsystems of target started 00:11:21.407 nvmf target is running 00:11:21.407 all subsystems of target stopped 00:11:21.407 destroy targets's poll groups done 00:11:21.407 destroyed the nvmf target service 00:11:21.407 bdev subsystem finish successfully 00:11:21.407 nvmf threads destroy successfully 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.407 00:11:21.407 real 0m18.497s 00:11:21.407 user 0m51.730s 00:11:21.407 sys 0m4.676s 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:21.407 ************************************ 00:11:21.407 END TEST nvmf_example 00:11:21.407 ************************************ 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:21.407 ************************************ 00:11:21.407 START TEST nvmf_filesystem 00:11:21.407 ************************************ 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:11:21.407 * Looking for test storage... 
00:11:21.407 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 
00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # 
CONFIG_MAX_LCORES=128 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:21.407 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:21.408 #define SPDK_CONFIG_H 00:11:21.408 #define SPDK_CONFIG_APPS 1 00:11:21.408 #define SPDK_CONFIG_ARCH native 00:11:21.408 #undef SPDK_CONFIG_ASAN 00:11:21.408 #undef SPDK_CONFIG_AVAHI 00:11:21.408 #undef SPDK_CONFIG_CET 00:11:21.408 #define SPDK_CONFIG_COVERAGE 1 00:11:21.408 #define SPDK_CONFIG_CROSS_PREFIX 00:11:21.408 #undef 
SPDK_CONFIG_CRYPTO 00:11:21.408 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:21.408 #undef SPDK_CONFIG_CUSTOMOCF 00:11:21.408 #undef SPDK_CONFIG_DAOS 00:11:21.408 #define SPDK_CONFIG_DAOS_DIR 00:11:21.408 #define SPDK_CONFIG_DEBUG 1 00:11:21.408 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:21.408 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build 00:11:21.408 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:21.408 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:21.408 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:21.408 #undef SPDK_CONFIG_DPDK_UADK 00:11:21.408 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:11:21.408 #define SPDK_CONFIG_EXAMPLES 1 00:11:21.408 #undef SPDK_CONFIG_FC 00:11:21.408 #define SPDK_CONFIG_FC_PATH 00:11:21.408 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:21.408 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:21.408 #undef SPDK_CONFIG_FUSE 00:11:21.408 #undef SPDK_CONFIG_FUZZER 00:11:21.408 #define SPDK_CONFIG_FUZZER_LIB 00:11:21.408 #undef SPDK_CONFIG_GOLANG 00:11:21.408 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:21.408 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:21.408 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:21.408 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:21.408 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:21.408 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:21.408 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:21.408 #define SPDK_CONFIG_IDXD 1 00:11:21.408 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:21.408 #undef SPDK_CONFIG_IPSEC_MB 00:11:21.408 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:21.408 #define SPDK_CONFIG_ISAL 1 00:11:21.408 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:21.408 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:21.408 #define SPDK_CONFIG_LIBDIR 00:11:21.408 #undef SPDK_CONFIG_LTO 00:11:21.408 #define SPDK_CONFIG_MAX_LCORES 128 00:11:21.408 #define SPDK_CONFIG_NVME_CUSE 1 00:11:21.408 #undef SPDK_CONFIG_OCF 00:11:21.408 #define SPDK_CONFIG_OCF_PATH 00:11:21.408 #define SPDK_CONFIG_OPENSSL_PATH 00:11:21.408 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:21.408 #define SPDK_CONFIG_PGO_DIR 00:11:21.408 #undef SPDK_CONFIG_PGO_USE 00:11:21.408 #define SPDK_CONFIG_PREFIX /usr/local 00:11:21.408 #undef SPDK_CONFIG_RAID5F 00:11:21.408 #undef SPDK_CONFIG_RBD 00:11:21.408 #define SPDK_CONFIG_RDMA 1 00:11:21.408 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:21.408 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:21.408 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:21.408 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:21.408 #define SPDK_CONFIG_SHARED 1 00:11:21.408 #undef SPDK_CONFIG_SMA 00:11:21.408 #define SPDK_CONFIG_TESTS 1 00:11:21.408 #undef SPDK_CONFIG_TSAN 00:11:21.408 #define SPDK_CONFIG_UBLK 1 00:11:21.408 #define SPDK_CONFIG_UBSAN 1 00:11:21.408 #undef SPDK_CONFIG_UNIT_TESTS 00:11:21.408 #undef SPDK_CONFIG_URING 00:11:21.408 #define SPDK_CONFIG_URING_PATH 00:11:21.408 #undef SPDK_CONFIG_URING_ZNS 00:11:21.408 #undef SPDK_CONFIG_USDT 00:11:21.408 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:21.408 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:21.408 #undef SPDK_CONFIG_VFIO_USER 00:11:21.408 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:21.408 #define SPDK_CONFIG_VHOST 1 00:11:21.408 #define SPDK_CONFIG_VIRTIO 1 00:11:21.408 #undef SPDK_CONFIG_VTUNE 00:11:21.408 #define SPDK_CONFIG_VTUNE_DIR 00:11:21.408 #define SPDK_CONFIG_WERROR 1 00:11:21.408 #define SPDK_CONFIG_WPDK_DIR 00:11:21.408 #undef SPDK_CONFIG_XNVME 00:11:21.408 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:21.408 10:00:05 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.408 10:00:05 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power ]] 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : rdma 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:21.408 10:00:05 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:21.408 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : mlx5 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:21.409 10:00:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@282 -- # MAKEFLAGS=-j96 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=rdma 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2476768 ]] 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2476768 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.5TjNL1 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.5TjNL1/tests/target /tmp/spdk.5TjNL1 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:21.409 10:00:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953421824 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4331008000 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=189608386560 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=195974332416 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6365945856 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97973870592 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987166208 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=13295616 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=39171837952 00:11:21.409 10:00:06 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=39194869760 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=23031808 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=97986265088 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=97987166208 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=901120 00:11:21.409 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=19597426688 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=19597430784 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:21.410 * Looking for test storage... 
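At this point set_test_storage has parsed `df -T` into per-mount associative arrays (mounts, fss, sizes, avails, uses) and is about to compare each storage candidate against requested_size=2214592512 (~2 GiB). The sketch below reimplements just that probe under stated assumptions: pick_test_storage is an invented helper name, it queries df directly rather than reusing the script's arrays, and it assumes mount points without spaces.

#!/usr/bin/env bash
# Minimal sketch of the storage probe, not the SPDK implementation.

pick_test_storage() {
    local requested_size=$1; shift
    local -A avails
    local source fs size used avail pct mount dir

    # Record available bytes per mount point; df reports 1K blocks,
    # and tail skips the header line.
    while read -r source fs size used avail pct mount; do
        avails["$mount"]=$(( avail * 1024 ))
    done < <(df -T | tail -n +2)

    for dir in "$@"; do
        # Resolve which mount the candidate directory lives on.
        mount=$(df --output=target "$dir" 2>/dev/null | tail -n 1) || continue
        if (( ${avails[$mount]:-0} >= requested_size )); then
            printf '* Found test storage at %s\n' "$dir"
            return 0
        fi
    done
    return 1
}

# Example: ask for ~2 GiB somewhere under /tmp.
pick_test_storage $(( 2 * 1024 * 1024 * 1024 )) /tmp

The log's own run lands on the overlay root (target_space=189608386560 against the ~2 GiB request), which is why the very next lines report the SPDK test storage under test/nvmf/target.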
00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=189608386560 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8580538368 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:21.410 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 
0 -eq 1 ']' 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:21.410 10:00:06 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@296 -- # local -ga e810 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:26.685 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 
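The xtrace above shows gather_supported_nvmf_pci_devs at work: nvmf/common.sh builds per-family device-ID lists (e810/x722 for Intel 0x8086, mlx for Mellanox 0x15b3), walks the PCI bus cache, and reports the two Mellanox 0x1015 ports at 0000:da:00.0 and 0000:da:00.1. For reference, the same vendor:device match can be reproduced stand-alone; this is a minimal sketch assuming pciutils' lspci is installed, not the harness code itself:

  # List PCI functions matching vendor 0x15b3 (Mellanox), device 0x1015,
  # echoed in the same "Found <addr> (0x15b3 - 0x1015)" shape as the harness.
  lspci -Dnn -d 15b3:1015 | while read -r addr _; do
      echo "Found ${addr} (0x15b3 - 0x1015)"
  done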
00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:26.685 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:26.685 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:26.686 Found net devices under 0000:da:00.0: mlx_0_0 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:26.686 Found net devices under 0000:da:00.1: mlx_0_1 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@414 -- # is_hw=yes 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@420 -- # rdma_device_init 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # uname 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:26.686 10:00:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:26.686 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:26.686 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:26.686 altname enp218s0f0np0 00:11:26.686 altname ens818f0np0 00:11:26.686 inet 192.168.100.8/24 scope global mlx_0_0 00:11:26.686 valid_lft forever preferred_lft forever 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:26.686 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:26.686 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:26.686 altname enp218s0f1np1 00:11:26.686 altname ens818f1np1 00:11:26.686 inet 192.168.100.9/24 scope global mlx_0_1 00:11:26.686 valid_lft forever preferred_lft forever 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # continue 2 00:11:26.686 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:26.687 10:00:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:26.687 192.168.100.9' 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:26.687 192.168.100.9' 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # head -n 1 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:26.687 192.168.100.9' 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # tail -n +2 00:11:26.687 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # head -n 1 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:26.944 ************************************ 00:11:26.944 START TEST nvmf_filesystem_no_in_capsule 00:11:26.944 ************************************ 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.944 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2480315 00:11:26.944 10:00:11 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2480315 00:11:26.945 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:26.945 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2480315 ']' 00:11:26.945 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.945 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:26.945 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.945 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:26.945 10:00:11 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.945 [2024-07-25 10:00:11.957186] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:26.945 [2024-07-25 10:00:11.957223] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.945 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.945 [2024-07-25 10:00:12.025403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:26.945 [2024-07-25 10:00:12.102087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.945 [2024-07-25 10:00:12.102130] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.945 [2024-07-25 10:00:12.102137] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.945 [2024-07-25 10:00:12.102143] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:26.945 [2024-07-25 10:00:12.102148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
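At this point nvmfappstart has launched the target with the exact invocation logged above (build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF, i.e. shared-memory id 0, tracepoint group mask 0xFFFF, reactor core mask 0xF for four cores), and waitforlisten polls until the RPC server answers on /var/tmp/spdk.sock. Per the startup notices just printed, a runtime trace of this instance could be captured while it runs; a sketch using only commands the target's own notices name:

  # shm id 0 matches the -i 0 above; both lines are quoted from the
  # app_setup_trace notices in the log.
  spdk_trace -s nvmf -i 0       # capture a snapshot of events at runtime
  cp /dev/shm/nvmf_trace.0 ~/   # or keep the shm file for offline analysis/debug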
00:11:26.945 [2024-07-25 10:00:12.102209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.945 [2024-07-25 10:00:12.102315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:26.945 [2024-07-25 10:00:12.102378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.945 [2024-07-25 10:00:12.102380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.882 [2024-07-25 10:00:12.808395] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:11:27.882 [2024-07-25 10:00:12.828806] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1c9fcc0/0x1ca41b0) succeed. 00:11:27.882 [2024-07-25 10:00:12.837942] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ca1300/0x1ce5840) succeed. 
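With both IB devices created, the test provisions the target over that RPC socket, as the next lines show: an RDMA transport with in-capsule data size 0 for this variant, a 512 MiB malloc bdev with 512-byte blocks, one subsystem, one namespace, and one RDMA listener. Condensed into plain rpc.py calls with arguments taken verbatim from the xtrace (the wrapper path is assumed to be SPDK's usual scripts/rpc.py):

  scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB, 512 B blocks -> 1048576 blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420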
00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.882 10:00:12 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.144 Malloc1 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.144 [2024-07-25 10:00:13.077815] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:28.144 10:00:13 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:28.144 { 00:11:28.144 "name": "Malloc1", 00:11:28.144 "aliases": [ 00:11:28.144 "5be83ed0-4f9a-4714-8af4-2f426ef55fc6" 00:11:28.144 ], 00:11:28.144 "product_name": "Malloc disk", 00:11:28.144 "block_size": 512, 00:11:28.144 "num_blocks": 1048576, 00:11:28.144 "uuid": "5be83ed0-4f9a-4714-8af4-2f426ef55fc6", 00:11:28.144 "assigned_rate_limits": { 00:11:28.144 "rw_ios_per_sec": 0, 00:11:28.144 "rw_mbytes_per_sec": 0, 00:11:28.144 "r_mbytes_per_sec": 0, 00:11:28.144 "w_mbytes_per_sec": 0 00:11:28.144 }, 00:11:28.144 "claimed": true, 00:11:28.144 "claim_type": "exclusive_write", 00:11:28.144 "zoned": false, 00:11:28.144 "supported_io_types": { 00:11:28.144 "read": true, 00:11:28.144 "write": true, 00:11:28.144 "unmap": true, 00:11:28.144 "flush": true, 00:11:28.144 "reset": true, 00:11:28.144 "nvme_admin": false, 00:11:28.144 "nvme_io": false, 00:11:28.144 "nvme_io_md": false, 00:11:28.144 "write_zeroes": true, 00:11:28.144 "zcopy": true, 00:11:28.144 "get_zone_info": false, 00:11:28.144 "zone_management": false, 00:11:28.144 "zone_append": false, 00:11:28.144 "compare": false, 00:11:28.144 "compare_and_write": false, 00:11:28.144 "abort": true, 00:11:28.144 "seek_hole": false, 00:11:28.144 "seek_data": false, 00:11:28.144 "copy": true, 00:11:28.144 "nvme_iov_md": false 00:11:28.144 }, 00:11:28.144 "memory_domains": [ 00:11:28.144 { 00:11:28.144 "dma_device_id": "system", 00:11:28.144 "dma_device_type": 1 00:11:28.144 }, 00:11:28.144 { 00:11:28.144 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.144 "dma_device_type": 2 00:11:28.144 } 00:11:28.144 ], 00:11:28.144 "driver_specific": {} 00:11:28.144 } 00:11:28.144 ]' 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # 
malloc_size=536870912 00:11:28.144 10:00:13 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:29.110 10:00:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:29.110 10:00:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:29.110 10:00:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.110 10:00:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:29.110 10:00:14 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:31.011 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:31.011 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:31.011 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:31.269 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:31.270 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:31.270 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 
-- # (( nvme_size == malloc_size )) 00:11:31.270 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:31.270 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:31.270 10:00:16 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.204 ************************************ 00:11:32.204 START TEST filesystem_ext4 00:11:32.204 ************************************ 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:32.204 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:32.204 mke2fs 1.46.5 (30-Dec-2021) 00:11:32.463 Discarding device blocks: 0/522240 done 00:11:32.464 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:32.464 Filesystem UUID: c1fa96e8-85a7-46ed-8916-d7ccb53e5f4d 00:11:32.464 Superblock backups stored on 
blocks: 00:11:32.464 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:32.464 00:11:32.464 Allocating group tables: 0/64 done 00:11:32.464 Writing inode tables: 0/64 done 00:11:32.464 Creating journal (8192 blocks): done 00:11:32.464 Writing superblocks and filesystem accounting information: 0/64 done 00:11:32.464 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2480315 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.464 00:11:32.464 real 0m0.176s 00:11:32.464 user 0m0.021s 00:11:32.464 sys 0m0.067s 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:32.464 ************************************ 00:11:32.464 END TEST filesystem_ext4 00:11:32.464 ************************************ 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule 
-- common/autotest_common.sh@10 -- # set +x 00:11:32.464 ************************************ 00:11:32.464 START TEST filesystem_btrfs 00:11:32.464 ************************************ 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:32.464 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:32.723 btrfs-progs v6.6.2 00:11:32.723 See https://btrfs.readthedocs.io for more information. 00:11:32.723 00:11:32.723 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:32.723 NOTE: several default settings have changed in version 5.15, please make sure 00:11:32.723 this does not affect your deployments: 00:11:32.723 - DUP for metadata (-m dup) 00:11:32.723 - enabled no-holes (-O no-holes) 00:11:32.723 - enabled free-space-tree (-R free-space-tree) 00:11:32.723 00:11:32.723 Label: (null) 00:11:32.723 UUID: 1c90eabe-59f1-49c5-96df-d0511c88ee2f 00:11:32.723 Node size: 16384 00:11:32.723 Sector size: 4096 00:11:32.723 Filesystem size: 510.00MiB 00:11:32.723 Block group profiles: 00:11:32.723 Data: single 8.00MiB 00:11:32.723 Metadata: DUP 32.00MiB 00:11:32.723 System: DUP 8.00MiB 00:11:32.723 SSD detected: yes 00:11:32.723 Zoned device: no 00:11:32.723 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:32.723 Runtime features: free-space-tree 00:11:32.723 Checksum: crc32c 00:11:32.723 Number of devices: 1 00:11:32.723 Devices: 00:11:32.723 ID SIZE PATH 00:11:32.723 1 510.00MiB /dev/nvme0n1p1 00:11:32.723 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2480315 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.723 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.724 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.724 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.724 00:11:32.724 real 0m0.248s 00:11:32.724 user 0m0.031s 00:11:32.724 sys 0m0.121s 00:11:32.724 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.724 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:32.724 ************************************ 
00:11:32.724 END TEST filesystem_btrfs 00:11:32.724 ************************************ 00:11:32.724 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:32.724 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:32.724 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:32.724 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:32.982 ************************************ 00:11:32.982 START TEST filesystem_xfs 00:11:32.982 ************************************ 00:11:32.982 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:32.982 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:32.983 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:32.983 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:32.983 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:32.983 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:32.983 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:32.983 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:32.983 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:32.983 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:32.983 10:00:17 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:32.983 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:32.983 = sectsz=512 attr=2, projid32bit=1 00:11:32.983 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:32.983 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:32.983 data = bsize=4096 blocks=130560, imaxpct=25 00:11:32.983 = sunit=0 swidth=0 blks 00:11:32.983 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:32.983 log =internal log bsize=4096 blocks=16384, version=2 00:11:32.983 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:32.983 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:32.983 Discarding blocks...Done. 
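mkfs.xfs has now formatted the partition, and the lines that follow repeat the same verify cycle already run for ext4 and btrfs. Reconstructed from the xtrace (filesystem.sh steps @21 through @43), the per-filesystem check is roughly:

  mkfs.xfs -f /dev/nvme0n1p1        # force flag is -F for ext4, -f for btrfs/xfs
  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa; sync       # the filesystem must accept a write
  rm /mnt/device/aaa; sync
  umount /mnt/device
  kill -0 "$nvmfpid"                # the target process must still be alive
  lsblk -l -o NAME | grep -q -w nvme0n1     # namespace still visible to the host
  lsblk -l -o NAME | grep -q -w nvme0n1p1   # and so is the partition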
00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2480315 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:32.983 00:11:32.983 real 0m0.199s 00:11:32.983 user 0m0.023s 00:11:32.983 sys 0m0.067s 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:32.983 ************************************ 00:11:32.983 END TEST filesystem_xfs 00:11:32.983 ************************************ 00:11:32.983 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:33.241 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:33.241 10:00:18 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:34.176 10:00:19 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2480315 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2480315 ']' 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2480315 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2480315 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2480315' 00:11:34.176 killing process with pid 2480315 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2480315 00:11:34.176 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2480315 00:11:34.435 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:34.435 00:11:34.435 real 0m7.673s 00:11:34.435 user 0m29.879s 00:11:34.435 sys 0m1.062s 00:11:34.435 10:00:19 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:34.435 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.435 ************************************ 00:11:34.435 END TEST nvmf_filesystem_no_in_capsule 00:11:34.435 ************************************ 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:34.695 ************************************ 00:11:34.695 START TEST nvmf_filesystem_in_capsule 00:11:34.695 ************************************ 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2481794 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2481794 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2481794 ']' 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
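The run that starts here repeats the whole filesystem suite with in_capsule=4096: the only functional difference from the no_in_capsule pass above is the -c 4096 argument to nvmf_create_transport, which lets hosts carry up to 4096 bytes of write data inside the NVMe-oF command capsule instead of forcing the target to fetch it with an RDMA READ. Condensed to plain rpc.py calls, the target-side setup traced below amounts to the following sketch; the names, addresses, and sizes are copied from the log, while the rpc.py invocation path is assumed relative to the spdk checkout:

    # start the target, then wire up transport, bdev, subsystem and listener
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1   # 512 MiB backing ram disk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420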
00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:34.695 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.695 [2024-07-25 10:00:19.703842] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:34.695 [2024-07-25 10:00:19.703879] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.695 EAL: No free 2048 kB hugepages reported on node 1 00:11:34.695 [2024-07-25 10:00:19.755156] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:34.695 [2024-07-25 10:00:19.832961] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:34.695 [2024-07-25 10:00:19.832998] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:34.695 [2024-07-25 10:00:19.833005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:34.695 [2024-07-25 10:00:19.833011] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:34.695 [2024-07-25 10:00:19.833036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:34.695 [2024-07-25 10:00:19.836144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:34.695 [2024-07-25 10:00:19.836181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:34.695 [2024-07-25 10:00:19.836288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.695 [2024-07-25 10:00:19.836288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:34.954 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:34.954 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:34.954 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:34.954 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:34.954 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.954 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:34.954 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:34.954 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:11:34.954 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.954 10:00:19 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.954 [2024-07-25 10:00:20.005925] 
rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6fdcc0/0x7021b0) succeed. 00:11:34.954 [2024-07-25 10:00:20.015150] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6ff300/0x743840) succeed. 00:11:35.212 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.212 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:35.212 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.212 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.212 Malloc1 00:11:35.212 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.212 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.213 [2024-07-25 10:00:20.276298] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:35.213 10:00:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:35.213 { 00:11:35.213 "name": "Malloc1", 00:11:35.213 "aliases": [ 00:11:35.213 "09fa7b7f-474c-46f7-a1e0-f85982e76b19" 00:11:35.213 ], 00:11:35.213 "product_name": "Malloc disk", 00:11:35.213 "block_size": 512, 00:11:35.213 "num_blocks": 1048576, 00:11:35.213 "uuid": "09fa7b7f-474c-46f7-a1e0-f85982e76b19", 00:11:35.213 "assigned_rate_limits": { 00:11:35.213 "rw_ios_per_sec": 0, 00:11:35.213 "rw_mbytes_per_sec": 0, 00:11:35.213 "r_mbytes_per_sec": 0, 00:11:35.213 "w_mbytes_per_sec": 0 00:11:35.213 }, 00:11:35.213 "claimed": true, 00:11:35.213 "claim_type": "exclusive_write", 00:11:35.213 "zoned": false, 00:11:35.213 "supported_io_types": { 00:11:35.213 "read": true, 00:11:35.213 "write": true, 00:11:35.213 "unmap": true, 00:11:35.213 "flush": true, 00:11:35.213 "reset": true, 00:11:35.213 "nvme_admin": false, 00:11:35.213 "nvme_io": false, 00:11:35.213 "nvme_io_md": false, 00:11:35.213 "write_zeroes": true, 00:11:35.213 "zcopy": true, 00:11:35.213 "get_zone_info": false, 00:11:35.213 "zone_management": false, 00:11:35.213 "zone_append": false, 00:11:35.213 "compare": false, 00:11:35.213 "compare_and_write": false, 00:11:35.213 "abort": true, 00:11:35.213 "seek_hole": false, 00:11:35.213 "seek_data": false, 00:11:35.213 "copy": true, 00:11:35.213 "nvme_iov_md": false 00:11:35.213 }, 00:11:35.213 "memory_domains": [ 00:11:35.213 { 00:11:35.213 "dma_device_id": "system", 00:11:35.213 "dma_device_type": 1 00:11:35.213 }, 00:11:35.213 { 00:11:35.213 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.213 "dma_device_type": 2 00:11:35.213 } 00:11:35.213 ], 00:11:35.213 "driver_specific": {} 00:11:35.213 } 00:11:35.213 ]' 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:35.213 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:35.497 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:35.497 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:35.497 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:35.497 10:00:20 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:35.497 10:00:20 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:11:36.431 10:00:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.431 10:00:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:36.431 10:00:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.431 10:00:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:36.431 10:00:21 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:38.336 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:38.336 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:38.336 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.336 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:38.336 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.336 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:38.337 10:00:23 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:38.337 10:00:23 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:39.718 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:39.718 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:39.718 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:39.718 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.718 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.718 ************************************ 00:11:39.718 START TEST filesystem_in_capsule_ext4 00:11:39.718 ************************************ 00:11:39.718 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:39.718 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:39.718 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:39.719 mke2fs 1.46.5 (30-Dec-2021) 00:11:39.719 Discarding device blocks: 0/522240 done 
00:11:39.719 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:39.719 Filesystem UUID: de04e181-ffc6-4c03-9cf5-85265e228e6a 00:11:39.719 Superblock backups stored on blocks: 00:11:39.719 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:39.719 00:11:39.719 Allocating group tables: 0/64 done 00:11:39.719 Writing inode tables: 0/64 done 00:11:39.719 Creating journal (8192 blocks): done 00:11:39.719 Writing superblocks and filesystem accounting information: 0/64 done 00:11:39.719 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2481794 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.719 00:11:39.719 real 0m0.174s 00:11:39.719 user 0m0.020s 00:11:39.719 sys 0m0.065s 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:39.719 ************************************ 00:11:39.719 END TEST filesystem_in_capsule_ext4 00:11:39.719 ************************************ 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:39.719 10:00:24 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.719 ************************************ 00:11:39.719 START TEST filesystem_in_capsule_btrfs 00:11:39.719 ************************************ 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:39.719 btrfs-progs v6.6.2 00:11:39.719 See https://btrfs.readthedocs.io for more information. 00:11:39.719 00:11:39.719 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:39.719 NOTE: several default settings have changed in version 5.15, please make sure 00:11:39.719 this does not affect your deployments: 00:11:39.719 - DUP for metadata (-m dup) 00:11:39.719 - enabled no-holes (-O no-holes) 00:11:39.719 - enabled free-space-tree (-R free-space-tree) 00:11:39.719 00:11:39.719 Label: (null) 00:11:39.719 UUID: 1cc1496d-4b9e-495f-b4cd-24614bf8dd8b 00:11:39.719 Node size: 16384 00:11:39.719 Sector size: 4096 00:11:39.719 Filesystem size: 510.00MiB 00:11:39.719 Block group profiles: 00:11:39.719 Data: single 8.00MiB 00:11:39.719 Metadata: DUP 32.00MiB 00:11:39.719 System: DUP 8.00MiB 00:11:39.719 SSD detected: yes 00:11:39.719 Zoned device: no 00:11:39.719 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:39.719 Runtime features: free-space-tree 00:11:39.719 Checksum: crc32c 00:11:39.719 Number of devices: 1 00:11:39.719 Devices: 00:11:39.719 ID SIZE PATH 00:11:39.719 1 510.00MiB /dev/nvme0n1p1 00:11:39.719 00:11:39.719 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:39.978 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:39.978 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:39.978 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:39.978 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:39.978 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:39.978 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:39.978 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:39.978 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2481794 00:11:39.978 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:39.978 10:00:24 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:39.978 00:11:39.978 real 0m0.242s 00:11:39.978 user 0m0.016s 00:11:39.978 sys 0m0.130s 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:39.978 10:00:25 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:39.978 ************************************ 00:11:39.978 END TEST filesystem_in_capsule_btrfs 00:11:39.978 ************************************ 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:39.978 ************************************ 00:11:39.978 START TEST filesystem_in_capsule_xfs 00:11:39.978 ************************************ 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:39.978 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:40.236 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:40.236 = sectsz=512 attr=2, projid32bit=1 00:11:40.236 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:40.236 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:40.236 data = bsize=4096 blocks=130560, imaxpct=25 00:11:40.236 = sunit=0 swidth=0 blks 00:11:40.236 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:40.236 log =internal log bsize=4096 blocks=16384, version=2 00:11:40.236 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:40.236 realtime =none extsz=4096 
blocks=0, rtextents=0 00:11:40.236 Discarding blocks...Done. 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2481794 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:40.236 00:11:40.236 real 0m0.195s 00:11:40.236 user 0m0.031s 00:11:40.236 sys 0m0.058s 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:40.236 ************************************ 00:11:40.236 END TEST filesystem_in_capsule_xfs 00:11:40.236 ************************************ 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:40.236 10:00:25 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.169 10:00:26 
nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2481794 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2481794 ']' 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2481794 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:41.169 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2481794 00:11:41.428 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:41.428 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:41.428 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2481794' 00:11:41.428 killing process with pid 2481794 00:11:41.428 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2481794 00:11:41.428 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2481794 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:41.687 00:11:41.687 real 0m7.134s 
00:11:41.687 user 0m27.622s 00:11:41.687 sys 0m1.010s 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.687 ************************************ 00:11:41.687 END TEST nvmf_filesystem_in_capsule 00:11:41.687 ************************************ 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:41.687 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:41.687 rmmod nvme_rdma 00:11:41.687 rmmod nvme_fabrics 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:41.946 00:11:41.946 real 0m21.029s 00:11:41.946 user 0m59.351s 00:11:41.946 sys 0m6.599s 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:41.946 ************************************ 00:11:41.946 END TEST nvmf_filesystem 00:11:41.946 ************************************ 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.946 ************************************ 00:11:41.946 START TEST nvmf_target_discovery 00:11:41.946 ************************************ 00:11:41.946 10:00:26 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:11:41.946 * Looking for test storage... 
00:11:41.946 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.946 10:00:27 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:41.946 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:11:41.947 10:00:27 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@296 -- # e810=() 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:48.520 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == 
unbound ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:48.520 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.520 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:48.521 Found net devices under 0000:da:00.0: mlx_0_0 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:48.521 Found net devices under 0000:da:00.1: mlx_0_1 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@420 -- # rdma_device_init 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # uname 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:48.521 10:00:32 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:48.521 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:48.521 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:48.521 altname enp218s0f0np0 00:11:48.521 altname ens818f0np0 00:11:48.521 inet 192.168.100.8/24 scope global mlx_0_0 00:11:48.521 valid_lft forever preferred_lft forever 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@81 -- # ip 
addr show mlx_0_1 00:11:48.521 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:48.521 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:48.521 altname enp218s0f1np1 00:11:48.521 altname ens818f1np1 00:11:48.521 inet 192.168.100.9/24 scope global mlx_0_1 00:11:48.521 valid_lft forever preferred_lft forever 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # continue 2 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:48.521 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:48.522 192.168.100.9' 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:48.522 192.168.100.9' 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # head -n 1 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:48.522 192.168.100.9' 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # tail -n +2 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # head -n 1 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2486207 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2486207 00:11:48.522 10:00:32 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2486207 ']' 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:48.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:48.522 10:00:32 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.522 [2024-07-25 10:00:32.806906] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:48.522 [2024-07-25 10:00:32.806950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:48.522 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.522 [2024-07-25 10:00:32.875517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:48.522 [2024-07-25 10:00:32.954245] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:48.522 [2024-07-25 10:00:32.954280] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:48.522 [2024-07-25 10:00:32.954287] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:48.522 [2024-07-25 10:00:32.954293] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:48.522 [2024-07-25 10:00:32.954298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
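The nvmfappstart sequence traced above boils down to launching nvmf_tgt with a core mask and polling its RPC socket until it answers. A minimal sketch of that launch-and-wait pattern, assuming the paths shown in the trace (the polling loop and retry budget here are illustrative; the real waitforlisten helper in autotest_common.sh is more elaborate):

SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
RPC_SOCK=/var/tmp/spdk.sock

# Launch the target on cores 0-3 (-m 0xF) with all tracepoint groups enabled (-e 0xFFFF),
# exactly as the nvmf/common.sh@480 trace line above does.
"$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Poll the UNIX-domain RPC socket until the app is up; the trace shows the same
# wait with max_retries=100.
for ((i = 0; i < 100; i++)); do
    "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
done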
00:11:48.522 [2024-07-25 10:00:32.954362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.522 [2024-07-25 10:00:32.954471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:48.522 [2024-07-25 10:00:32.954577] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.522 [2024-07-25 10:00:32.954578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.522 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:48.522 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:11:48.522 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:48.522 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:48.522 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.522 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.522 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:48.522 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.522 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.522 [2024-07-25 10:00:33.676079] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf90cc0/0xf951b0) succeed. 00:11:48.782 [2024-07-25 10:00:33.685233] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf92300/0xfd6840) succeed. 
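With the RDMA transport created and both mlx5 IB devices registered, the trace below runs the setup loop of target/discovery.sh: four null bdevs, each wrapped in its own subsystem with one namespace and an RDMA listener, plus a discovery listener and a referral on port 4430. Condensed into loop form from the rpc_cmd calls visible in the trace (a sketch; the literal script text may differ):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# (the transport itself was created just above:
#  nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192)

for i in $(seq 1 4); do
    $rpc bdev_null_create "Null$i" 102400 512     # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh@11-12
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t rdma -a 192.168.100.8 -s 4420
done

# Listen for discovery traffic too, and advertise a referral on NVMF_PORT_REFERRAL (4430).
$rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
$rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430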
00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.782 Null1 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.782 [2024-07-25 10:00:33.843717] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.782 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 Null2 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:48.783 10:00:33 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 Null3 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 Null4 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.065 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.065 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:11:49.065 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.065 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.065 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.065 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:11:49.065 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.065 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.065 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:49.065 10:00:33 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420
00:11:49.065
00:11:49.065 Discovery Log Number of Records 6, Generation counter 6
00:11:49.065 =====Discovery Log Entry 0======
00:11:49.065 trtype: rdma
00:11:49.065 adrfam: ipv4
00:11:49.065 subtype: current discovery subsystem
00:11:49.065 treq: not required
00:11:49.065 portid: 0
00:11:49.065 trsvcid: 4420
00:11:49.065 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:49.065 traddr: 192.168.100.8
00:11:49.065 eflags: explicit discovery connections, duplicate discovery information
00:11:49.065 rdma_prtype: not specified
00:11:49.065 rdma_qptype: connected
00:11:49.065 rdma_cms: rdma-cm
00:11:49.065 rdma_pkey: 0x0000
00:11:49.065 =====Discovery Log Entry 1======
00:11:49.065 trtype: rdma
00:11:49.065 adrfam: ipv4
00:11:49.065 subtype: nvme subsystem
00:11:49.065 treq: not required
00:11:49.065 portid: 0
00:11:49.065 trsvcid: 4420
00:11:49.065 subnqn: nqn.2016-06.io.spdk:cnode1
00:11:49.065 traddr: 192.168.100.8
00:11:49.065 eflags: none
00:11:49.065 rdma_prtype: not specified
00:11:49.065 rdma_qptype: connected
00:11:49.065 rdma_cms: rdma-cm
00:11:49.065 rdma_pkey: 0x0000
00:11:49.065 =====Discovery Log Entry 2======
00:11:49.065 trtype: rdma
00:11:49.065 adrfam: ipv4
00:11:49.065 subtype: nvme subsystem
00:11:49.065 treq: not required
00:11:49.065 portid: 0
00:11:49.065 trsvcid: 4420
00:11:49.065 subnqn: nqn.2016-06.io.spdk:cnode2
00:11:49.065 traddr: 192.168.100.8
00:11:49.065 eflags: none
00:11:49.065 rdma_prtype: not specified
00:11:49.065 rdma_qptype: connected
00:11:49.065 rdma_cms: rdma-cm
00:11:49.065 rdma_pkey: 0x0000
00:11:49.065 =====Discovery Log Entry 3======
00:11:49.065 trtype: rdma
00:11:49.065 adrfam: ipv4
00:11:49.065 subtype: nvme subsystem
00:11:49.065 treq: not required
00:11:49.065 portid: 0
00:11:49.065 trsvcid: 4420
00:11:49.065 subnqn: nqn.2016-06.io.spdk:cnode3
00:11:49.065 traddr: 192.168.100.8
00:11:49.065 eflags: none
00:11:49.065 rdma_prtype: not specified
00:11:49.065 rdma_qptype: connected
00:11:49.065 rdma_cms: rdma-cm
00:11:49.065 rdma_pkey: 0x0000
00:11:49.065 =====Discovery Log Entry 4======
00:11:49.065 trtype: rdma
00:11:49.065 adrfam: ipv4
00:11:49.065 subtype: nvme subsystem
00:11:49.065 treq: not required
00:11:49.065 portid: 0
00:11:49.065 trsvcid: 4420
00:11:49.065 subnqn: nqn.2016-06.io.spdk:cnode4
00:11:49.065 traddr: 192.168.100.8
00:11:49.065 eflags: none
00:11:49.065 rdma_prtype: not specified
00:11:49.065 rdma_qptype: connected
00:11:49.065 rdma_cms: rdma-cm
00:11:49.065 rdma_pkey: 0x0000
00:11:49.065 =====Discovery Log Entry 5======
00:11:49.065 trtype: rdma
00:11:49.065 adrfam: ipv4
00:11:49.065 subtype: discovery subsystem referral
00:11:49.065 treq: not required
00:11:49.065 portid: 0
00:11:49.065 trsvcid: 4430
00:11:49.065 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:11:49.065 traddr: 192.168.100.8
00:11:49.065 eflags: none
00:11:49.065 rdma_prtype: unrecognized
00:11:49.065 rdma_qptype: unrecognized
00:11:49.065 rdma_cms: unrecognized
00:11:49.065 rdma_pkey: 0x0000
00:11:49.065 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:11:49.065 Perform nvmf subsystem discovery via RPC
00:11:49.065 10:00:34
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:49.065 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.065 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.065 [ 00:11:49.065 { 00:11:49.065 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:49.065 "subtype": "Discovery", 00:11:49.065 "listen_addresses": [ 00:11:49.065 { 00:11:49.065 "trtype": "RDMA", 00:11:49.065 "adrfam": "IPv4", 00:11:49.065 "traddr": "192.168.100.8", 00:11:49.065 "trsvcid": "4420" 00:11:49.065 } 00:11:49.065 ], 00:11:49.065 "allow_any_host": true, 00:11:49.065 "hosts": [] 00:11:49.065 }, 00:11:49.065 { 00:11:49.065 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:49.065 "subtype": "NVMe", 00:11:49.065 "listen_addresses": [ 00:11:49.065 { 00:11:49.065 "trtype": "RDMA", 00:11:49.065 "adrfam": "IPv4", 00:11:49.065 "traddr": "192.168.100.8", 00:11:49.065 "trsvcid": "4420" 00:11:49.065 } 00:11:49.065 ], 00:11:49.065 "allow_any_host": true, 00:11:49.065 "hosts": [], 00:11:49.065 "serial_number": "SPDK00000000000001", 00:11:49.065 "model_number": "SPDK bdev Controller", 00:11:49.065 "max_namespaces": 32, 00:11:49.065 "min_cntlid": 1, 00:11:49.065 "max_cntlid": 65519, 00:11:49.065 "namespaces": [ 00:11:49.065 { 00:11:49.065 "nsid": 1, 00:11:49.065 "bdev_name": "Null1", 00:11:49.065 "name": "Null1", 00:11:49.065 "nguid": "D1DCA07F0DD942D195D1B53F41784AD9", 00:11:49.065 "uuid": "d1dca07f-0dd9-42d1-95d1-b53f41784ad9" 00:11:49.065 } 00:11:49.065 ] 00:11:49.065 }, 00:11:49.065 { 00:11:49.065 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:49.065 "subtype": "NVMe", 00:11:49.065 "listen_addresses": [ 00:11:49.065 { 00:11:49.065 "trtype": "RDMA", 00:11:49.065 "adrfam": "IPv4", 00:11:49.065 "traddr": "192.168.100.8", 00:11:49.065 "trsvcid": "4420" 00:11:49.065 } 00:11:49.065 ], 00:11:49.065 "allow_any_host": true, 00:11:49.065 "hosts": [], 00:11:49.065 "serial_number": "SPDK00000000000002", 00:11:49.065 "model_number": "SPDK bdev Controller", 00:11:49.065 "max_namespaces": 32, 00:11:49.065 "min_cntlid": 1, 00:11:49.065 "max_cntlid": 65519, 00:11:49.065 "namespaces": [ 00:11:49.065 { 00:11:49.065 "nsid": 1, 00:11:49.065 "bdev_name": "Null2", 00:11:49.065 "name": "Null2", 00:11:49.065 "nguid": "8BE3669A143A4626AE0CA9142A3465A2", 00:11:49.065 "uuid": "8be3669a-143a-4626-ae0c-a9142a3465a2" 00:11:49.065 } 00:11:49.065 ] 00:11:49.065 }, 00:11:49.065 { 00:11:49.065 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:49.065 "subtype": "NVMe", 00:11:49.065 "listen_addresses": [ 00:11:49.065 { 00:11:49.065 "trtype": "RDMA", 00:11:49.065 "adrfam": "IPv4", 00:11:49.065 "traddr": "192.168.100.8", 00:11:49.065 "trsvcid": "4420" 00:11:49.065 } 00:11:49.065 ], 00:11:49.065 "allow_any_host": true, 00:11:49.065 "hosts": [], 00:11:49.065 "serial_number": "SPDK00000000000003", 00:11:49.065 "model_number": "SPDK bdev Controller", 00:11:49.065 "max_namespaces": 32, 00:11:49.065 "min_cntlid": 1, 00:11:49.065 "max_cntlid": 65519, 00:11:49.065 "namespaces": [ 00:11:49.065 { 00:11:49.065 "nsid": 1, 00:11:49.065 "bdev_name": "Null3", 00:11:49.065 "name": "Null3", 00:11:49.065 "nguid": "8284513DCEA14010BEEB6C22FBC3CE74", 00:11:49.065 "uuid": "8284513d-cea1-4010-beeb-6c22fbc3ce74" 00:11:49.065 } 00:11:49.065 ] 00:11:49.065 }, 00:11:49.065 { 00:11:49.065 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:49.065 "subtype": "NVMe", 00:11:49.065 "listen_addresses": [ 00:11:49.065 { 00:11:49.065 
"trtype": "RDMA", 00:11:49.065 "adrfam": "IPv4", 00:11:49.065 "traddr": "192.168.100.8", 00:11:49.065 "trsvcid": "4420" 00:11:49.065 } 00:11:49.065 ], 00:11:49.065 "allow_any_host": true, 00:11:49.065 "hosts": [], 00:11:49.065 "serial_number": "SPDK00000000000004", 00:11:49.065 "model_number": "SPDK bdev Controller", 00:11:49.065 "max_namespaces": 32, 00:11:49.065 "min_cntlid": 1, 00:11:49.065 "max_cntlid": 65519, 00:11:49.065 "namespaces": [ 00:11:49.065 { 00:11:49.066 "nsid": 1, 00:11:49.066 "bdev_name": "Null4", 00:11:49.066 "name": "Null4", 00:11:49.066 "nguid": "0A218D9F0FA4432D94708CCD93ADAB06", 00:11:49.066 "uuid": "0a218d9f-0fa4-432d-9470-8ccd93adab06" 00:11:49.066 } 00:11:49.066 ] 00:11:49.066 } 00:11:49.066 ] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:49.066 
10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:49.066 10:00:34 
nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:49.066 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:49.066 rmmod nvme_rdma 00:11:49.331 rmmod nvme_fabrics 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2486207 ']' 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2486207 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2486207 ']' 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2486207 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2486207 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2486207' 00:11:49.331 killing process with pid 2486207 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2486207 00:11:49.331 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2486207 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:49.590 00:11:49.590 real 0m7.626s 00:11:49.590 user 0m8.102s 00:11:49.590 sys 0m4.694s 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:49.590 ************************************ 00:11:49.590 END TEST nvmf_target_discovery 
00:11:49.590 ************************************ 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:49.590 ************************************ 00:11:49.590 START TEST nvmf_referrals 00:11:49.590 ************************************ 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:11:49.590 * Looking for test storage... 00:11:49.590 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 
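A referral, as exercised by the nvmf_referrals test starting here, is simply an extra discovery log entry pointing hosts at another discovery service. The target_discovery run above already showed the mechanics end to end; a sketch of the add/verify/remove round trip it performed (grounded in the rpc_cmd and nvme invocations traced above; the grep filter is illustrative and the host identity flags are omitted for brevity):

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

# The referral surfaces in the discovery log as an entry with
# "subtype: discovery subsystem referral" and trsvcid 4430 (Entry 5 above).
nvme discover -t rdma -a 192.168.100.8 -s 4420 | grep -B2 -A2 'discovery subsystem referral'

$rpc nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430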
00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:49.590
10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:49.590 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:11:49.849 10:00:34 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:55.123 10:00:40 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:11:55.123 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:11:55.123 10:00:40 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:11:55.123 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:11:55.123 Found net devices under 0000:da:00.0: mlx_0_0 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.123 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:11:55.124 Found net devices under 0000:da:00.1: mlx_0_1 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@420 -- # rdma_device_init 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # uname 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@62 -- # modprobe ib_cm 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@63 -- # modprobe ib_core 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@64 -- # modprobe ib_umad 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@66 -- # modprobe iw_cm 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # allocate_nic_ips 00:11:55.124 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # get_rdma_if_list 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 
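The allocate_nic_ips pass that follows reads back one IPv4 address per RDMA interface. A hedged reconstruction of the get_ip_address helper as traced at nvmf/common.sh@112-113 in the next lines (the pipeline is visible in the trace; the function wrapper shape is assumed):

    # Reconstructed from the common.sh@112-113 trace; not the verbatim source.
    get_ip_address() {
        local interface=$1
        # First IPv4 CIDR on the interface, with the /prefix stripped.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
    get_ip_address mlx_0_1   # -> 192.168.100.9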
00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:11:55.382 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:11:55.383 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:55.383 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:11:55.383 altname enp218s0f0np0 00:11:55.383 altname ens818f0np0 00:11:55.383 inet 192.168.100.8/24 scope global mlx_0_0 00:11:55.383 valid_lft forever preferred_lft forever 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:11:55.383 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:11:55.383 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:11:55.383 altname enp218s0f1np1 00:11:55.383 altname ens818f1np1 00:11:55.383 inet 192.168.100.9/24 scope global mlx_0_1 00:11:55.383 valid_lft forever preferred_lft forever 00:11:55.383 10:00:40 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # get_rdma_if_list 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_0 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # echo mlx_0_1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # continue 2 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:11:55.383 10:00:40 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # awk '{print $4}' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@113 -- # cut -d/ -f1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:11:55.383 192.168.100.9' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:11:55.383 192.168.100.9' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # head -n 1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:11:55.383 192.168.100.9' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # tail -n +2 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # head -n 1 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2489741 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2489741 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2489741 ']' 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:55.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:55.383 10:00:40 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.383 [2024-07-25 10:00:40.489040] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:55.383 [2024-07-25 10:00:40.489085] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.383 EAL: No free 2048 kB hugepages reported on node 1 00:11:55.642 [2024-07-25 10:00:40.556713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:55.642 [2024-07-25 10:00:40.640291] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:55.642 [2024-07-25 10:00:40.640326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:55.642 [2024-07-25 10:00:40.640334] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:55.642 [2024-07-25 10:00:40.640340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:55.642 [2024-07-25 10:00:40.640345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:55.642 [2024-07-25 10:00:40.640391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:55.642 [2024-07-25 10:00:40.640420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:55.642 [2024-07-25 10:00:40.640451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.642 [2024-07-25 10:00:40.640452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.208 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:56.208 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:11:56.208 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:56.208 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:56.208 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.208 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.208 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:11:56.208 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.208 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.208 [2024-07-25 10:00:41.358360] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x611cc0/0x6161b0) succeed. 00:11:56.208 [2024-07-25 10:00:41.367408] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x613300/0x657840) succeed. 
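With the target up and both IB devices created, the next trace lines drive the referral setup over RPC. A condensed sketch of that control-plane sequence, assuming rpc_cmd wraps spdk/scripts/rpc.py against the default /var/tmp/spdk.sock (method names and flags are copied from the trace):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed rpc_cmd target
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $RPC nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $RPC nvmf_discovery_add_referral -t rdma -a "$ip" -s 4430
    done
    $RPC nvmf_discovery_get_referrals | jq length    # the test asserts this is 3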
00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.466 [2024-07-25 10:00:41.488837] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:56.466 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # 
rpc_cmd nvmf_discovery_get_referrals 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.724 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.983 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:56.983 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:56.983 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:56.983 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.983 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:56.983 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:56.983 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # sort 00:11:56.983 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.983 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:56.983 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:56.984 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:56.984 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:56.984 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:56.984 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:56.984 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:56.984 10:00:41 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:56.984 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:57.242 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ 
nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:57.243 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 8009 -o json 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:57.502 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 
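The verification pattern repeated throughout this test pairs the RPC view with what a host actually sees in the discovery log page. A hedged sketch of the nvme-side check (command and jq filter copied from the referrals.sh@26 trace; hostnqn/hostid as set up earlier):

    # Extract referral traddrs from the discovery log page.
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -a 192.168.100.8 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' |
        sort
    # Empty output is what the test expects once all referrals are removed.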
00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:11:57.760 rmmod nvme_rdma 00:11:57.760 rmmod nvme_fabrics 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2489741 ']' 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2489741 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2489741 ']' 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2489741 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2489741 00:11:57.760 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.761 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.761 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2489741' 00:11:57.761 killing process with pid 2489741 00:11:57.761 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 2489741 00:11:57.761 10:00:42 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2489741 00:11:58.020 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:58.020 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:11:58.020 00:11:58.020 real 0m8.405s 00:11:58.020 user 0m11.864s 00:11:58.020 sys 0m5.028s 00:11:58.020 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.020 10:00:43 
nvmf_rdma.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:58.020 ************************************ 00:11:58.020 END TEST nvmf_referrals 00:11:58.020 ************************************ 00:11:58.020 10:00:43 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:58.020 10:00:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.020 10:00:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.020 10:00:43 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:58.020 ************************************ 00:11:58.020 START TEST nvmf_connect_disconnect 00:11:58.020 ************************************ 00:11:58.020 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:11:58.278 * Looking for test storage... 00:11:58.278 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:58.278 10:00:43 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:58.278 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:11:58.279 10:00:43 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@10 -- # set +x 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:03.553 10:00:48 
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:03.553 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:03.553 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:03.553 10:00:48 
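
The 'Found 0000:da:00.x' and 'Found net devices under ...' lines come from a sysfs walk that maps each Mellanox PCI function to its kernel netdev. An equivalent self-contained loop, filtering on vendor 0x15b3 as in this run:

    # Map Mellanox PCI functions to their net devices via sysfs, reproducing
    # the 'Found net devices under ...' discovery above.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == 0x15b3 ]] || continue          # Mellanox only
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done
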
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:03.553 Found net devices under 0000:da:00.0: mlx_0_0 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:03.553 Found net devices under 0000:da:00.1: mlx_0_1 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # uname 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:03.553 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # 
allocate_nic_ips 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:12:03.814 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:03.814 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:12:03.814 altname enp218s0f0np0 00:12:03.814 altname ens818f0np0 00:12:03.814 inet 192.168.100.8/24 scope global mlx_0_0 00:12:03.814 valid_lft forever preferred_lft forever 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:03.814 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:03.814 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:12:03.814 altname enp218s0f1np1 00:12:03.814 altname ens818f1np1 00:12:03.814 inet 192.168.100.9/24 scope global mlx_0_1 00:12:03.814 valid_lft forever preferred_lft forever 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == 
\m\l\x\_\0\_\0 ]] 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # continue 2 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:03.814 192.168.100.9' 00:12:03.814 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:03.814 192.168.100.9' 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:03.815 192.168.100.9' 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # tail -n +2 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # head -n 1 
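
Everything the allocate_nic_ips / get_available_rdma_ips passes above do reduces to one pipeline per interface plus a head/tail split of the resulting list. Standalone, with the interface names and addresses from this run:

    # First IPv4 address of an interface: the exact ip | awk | cut pipeline
    # nvmf/common.sh's get_ip_address runs above.
    get_ip_address() {
        ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)     # 192.168.100.8 here
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)    # 192.168.100.9 here
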
00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2493370 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2493370 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2493370 ']' 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:03.815 10:00:48 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:03.815 [2024-07-25 10:00:48.954516] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:03.815 [2024-07-25 10:00:48.954564] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.072 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.072 [2024-07-25 10:00:49.023337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.072 [2024-07-25 10:00:49.097040] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.072 [2024-07-25 10:00:49.097081] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
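
nvmfappstart above boots the target (pid 2493370) and then blocks in waitforlisten until the RPC socket answers. A reduced sketch of that start-and-wait, polling with a real RPC (spdk_get_version) rather than the harness's internal check:

    # Start nvmf_tgt and wait until /var/tmp/spdk.sock services RPCs.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until "$SPDK/scripts/rpc.py" spdk_get_version &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1    # bail out if the target died at startup
        sleep 0.5
    done
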
00:12:04.072 [2024-07-25 10:00:49.097088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.072 [2024-07-25 10:00:49.097094] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.072 [2024-07-25 10:00:49.097099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.072 [2024-07-25 10:00:49.097186] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.072 [2024-07-25 10:00:49.097312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.072 [2024-07-25 10:00:49.097397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.072 [2024-07-25 10:00:49.097398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.638 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:04.638 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:04.638 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:04.638 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:04.638 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:04.638 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.638 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:12:04.638 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.638 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:04.638 [2024-07-25 10:00:49.797502] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:12:04.896 [2024-07-25 10:00:49.817607] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f77cc0/0x1f7c1b0) succeed. 00:12:04.896 [2024-07-25 10:00:49.826734] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f79300/0x1fbd840) succeed. 
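
The transport just created here, together with the bdev/subsystem/namespace/listener calls that follow, is the whole target-side setup for this test. Issued directly against the RPC socket, the same sequence is:

    # Target provisioning as performed by connect_disconnect.sh@18-24 in the
    # trace, issued with SPDK's rpc.py (paths as in this workspace).
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    bdev=$($rpc bdev_malloc_create 64 512)        # 64 MiB malloc bdev -> 'Malloc0'
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 "$bdev"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
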
00:12:04.896 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.896 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:04.896 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.896 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:04.896 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.896 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:04.897 [2024-07-25 10:00:49.965843] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:04.897 10:00:49 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:09.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.498 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:24.973 10:01:09 
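
Each 'NQN:... disconnected 1 controller(s)' line above is the tail of one connect/disconnect round trip; the five iterations condense to a loop like the following, with the connect flags matching NVME_CONNECT='nvme connect -i 15' as rewritten for mlx5 earlier in the trace (the sleep is a sketch-only simplification of the harness's readiness check):

    # Five host-side connect/disconnect cycles against the listener above.
    nqn=nqn.2016-06.io.spdk:cnode1
    for _ in {1..5}; do
        nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 -n "$nqn"
        sleep 1                      # let the controller finish initializing
        nvme disconnect -n "$nqn"    # prints 'NQN:... disconnected 1 controller(s)'
    done
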
nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:24.973 rmmod nvme_rdma 00:12:24.973 rmmod nvme_fabrics 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2493370 ']' 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2493370 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2493370 ']' 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2493370 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2493370 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2493370' 00:12:24.973 killing process with pid 2493370 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2493370 00:12:24.973 10:01:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2493370 00:12:24.973 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:24.973 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:24.973 00:12:24.973 real 0m26.928s 00:12:24.973 user 1m24.755s 00:12:24.973 sys 0m5.221s 00:12:24.973 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:24.973 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.973 
************************************ 00:12:24.973 END TEST nvmf_connect_disconnect 00:12:24.973 ************************************ 00:12:24.973 10:01:10 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:24.973 10:01:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:24.973 10:01:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:24.973 10:01:10 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:24.973 ************************************ 00:12:24.973 START TEST nvmf_multitarget 00:12:24.973 ************************************ 00:12:24.973 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:12:25.232 * Looking for test storage... 00:12:25.232 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.232 10:01:10 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.232 10:01:10 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.796 
10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.796 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:31.797 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:31.797 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:31.797 Found net devices under 0000:da:00.0: mlx_0_0 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:31.797 Found net devices under 0000:da:00.1: mlx_0_1 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@420 -- # rdma_device_init 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # uname 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:31.797 10:01:15 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:31.797 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:31.797 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:12:31.797 altname enp218s0f0np0 00:12:31.797 altname ens818f0np0 00:12:31.797 inet 192.168.100.8/24 scope global mlx_0_0 00:12:31.797 valid_lft forever preferred_lft forever 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:31.797 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:31.797 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:12:31.797 altname enp218s0f1np1 00:12:31.797 altname ens818f1np1 00:12:31.797 inet 192.168.100.9/24 scope global mlx_0_1 00:12:31.797 valid_lft forever preferred_lft forever 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:31.797 10:01:15 
nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:31.797 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # continue 2 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:31.798 192.168.100.9' 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:31.798 192.168.100.9' 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # head -n 1 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:31.798 192.168.100.9' 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # tail -n +2 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # head -n 1 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2499997 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2499997 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2499997 ']' 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:12:31.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:31.798 10:01:15 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.798 [2024-07-25 10:01:15.973260] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:31.798 [2024-07-25 10:01:15.973308] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.798 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.798 [2024-07-25 10:01:16.041413] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.798 [2024-07-25 10:01:16.115305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.798 [2024-07-25 10:01:16.115345] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.798 [2024-07-25 10:01:16.115351] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.798 [2024-07-25 10:01:16.115357] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.798 [2024-07-25 10:01:16.115361] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.798 [2024-07-25 10:01:16.115442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.798 [2024-07-25 10:01:16.115548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.798 [2024-07-25 10:01:16.115635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.798 [2024-07-25 10:01:16.115636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:31.798 10:01:16 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:32.057 "nvmf_tgt_1" 00:12:32.057 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:32.057 "nvmf_tgt_2" 00:12:32.057 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.057 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:32.315 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:32.315 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:32.315 true 00:12:32.315 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:32.315 true 00:12:32.315 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:32.315 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:12:32.573 rmmod nvme_rdma 00:12:32.573 rmmod nvme_fabrics 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2499997 ']' 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2499997 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2499997 ']' 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 
-- # kill -0 2499997 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2499997 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2499997' 00:12:32.573 killing process with pid 2499997 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2499997 00:12:32.573 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2499997 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:12:32.833 00:12:32.833 real 0m7.703s 00:12:32.833 user 0m9.156s 00:12:32.833 sys 0m4.722s 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:32.833 ************************************ 00:12:32.833 END TEST nvmf_multitarget 00:12:32.833 ************************************ 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:32.833 ************************************ 00:12:32.833 START TEST nvmf_rpc 00:12:32.833 ************************************ 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:12:32.833 * Looking for test storage... 
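The nvmf_multitarget test that just finished above is, at its core, a count check over the management RPC: create two child targets, confirm the target list grew, delete them, confirm it shrank. A minimal sketch of that flow, using only the multitarget_rpc.py helper and the jq checks visible in the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32        # echoes "nvmf_tgt_1"
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32        # echoes "nvmf_tgt_2"
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1              # echoes "true" on success
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default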
00:12:32.833 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.833 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:32.834 10:01:17 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.834 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:33.093 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:33.093 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:33.093 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:33.093 10:01:17 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
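The nvmf/common.sh trace here (a repeat of the earlier nvmf_multitarget setup) is the harness bucketing NICs by PCI vendor:device ID before picking which ports to test. The array appends above and the per-device loop below boil down to roughly the following, with an assumed layout for pci_bus_cache ("vendor:device" key, space-separated PCI addresses as the value):

    declare -A pci_bus_cache                 # assumed: filled by an earlier lspci/sysfs scan
    intel=0x8086 mellanox=0x15b3
    e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
    x722=(${pci_bus_cache["$intel:0x37d2"]})
    mlx=(${pci_bus_cache["$mellanox:0x1015"]} ${pci_bus_cache["$mellanox:0x1017"]})  # ConnectX IDs (subset)
    pci_devs=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
    pci_devs=("${mlx[@]}")                   # this run pins the mlx5 driver, so only mlx survives

Both ports on this rig report device ID 0x1015, so the 0x1017/0x1019 comparisons fall through and the harness settles on 'nvme connect -i 15'. The same pass then maps each PCI address to its netdev via /sys/bus/pci/devices/$pci/net/ and later reads each port's IPv4 address with: ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1.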
00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:12:38.374 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:12:38.374 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.374 
10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:12:38.374 Found net devices under 0000:da:00.0: mlx_0_0 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:12:38.374 Found net devices under 0000:da:00.1: mlx_0_1 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@420 -- # rdma_device_init 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # uname 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@62 -- # modprobe ib_cm 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@63 -- # modprobe ib_core 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@64 -- # modprobe ib_umad 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@66 -- # modprobe iw_cm 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # allocate_nic_ips 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # get_rdma_if_list 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:12:38.374 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:12:38.375 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:38.375 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:12:38.375 altname enp218s0f0np0 00:12:38.375 altname ens818f0np0 00:12:38.375 inet 192.168.100.8/24 scope global mlx_0_0 00:12:38.375 valid_lft forever preferred_lft forever 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:38.375 10:01:23 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:12:38.375 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:12:38.375 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:12:38.375 altname enp218s0f1np1 00:12:38.375 altname ens818f1np1 00:12:38.375 inet 192.168.100.9/24 scope global mlx_0_1 00:12:38.375 valid_lft forever preferred_lft forever 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # get_rdma_if_list 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_0 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # echo mlx_0_1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # continue 2 00:12:38.375 10:01:23 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # awk '{print $4}' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@113 -- # cut -d/ -f1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:12:38.375 192.168.100.9' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:12:38.375 192.168.100.9' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # head -n 1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:12:38.375 192.168.100.9' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # tail -n +2 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # head -n 1 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:12:38.375 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2503319 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2503319 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2503319 ']' 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:38.633 10:01:23 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.633 [2024-07-25 10:01:23.593950] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:38.633 [2024-07-25 10:01:23.594010] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.633 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.633 [2024-07-25 10:01:23.663990] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.633 [2024-07-25 10:01:23.745532] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.633 [2024-07-25 10:01:23.745568] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.633 [2024-07-25 10:01:23.745575] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.633 [2024-07-25 10:01:23.745581] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.633 [2024-07-25 10:01:23.745586] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
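nvmfappstart above forks the target application and then blocks until its RPC socket answers, before rpc.sh runs any test cases. A simplified sketch of that launch-and-wait pattern, assuming SPDK's stock scripts/rpc.py and the paths from the trace (the real waitforlisten in autotest_common.sh adds retry limits and shared-memory bookkeeping):

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" || exit 1         # give up if the target died during init
        sleep 0.5
    done

Once the socket answers, everything goes through rpc_cmd: the nvmf_get_stats call right after this is what produces the per-poll-group JSON that follows.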
00:12:38.633 [2024-07-25 10:01:23.745641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:38.633 [2024-07-25 10:01:23.745799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.633 [2024-07-25 10:01:23.745672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:38.633 [2024-07-25 10:01:23.745801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.568 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:39.569 "tick_rate": 2100000000, 00:12:39.569 "poll_groups": [ 00:12:39.569 { 00:12:39.569 "name": "nvmf_tgt_poll_group_000", 00:12:39.569 "admin_qpairs": 0, 00:12:39.569 "io_qpairs": 0, 00:12:39.569 "current_admin_qpairs": 0, 00:12:39.569 "current_io_qpairs": 0, 00:12:39.569 "pending_bdev_io": 0, 00:12:39.569 "completed_nvme_io": 0, 00:12:39.569 "transports": [] 00:12:39.569 }, 00:12:39.569 { 00:12:39.569 "name": "nvmf_tgt_poll_group_001", 00:12:39.569 "admin_qpairs": 0, 00:12:39.569 "io_qpairs": 0, 00:12:39.569 "current_admin_qpairs": 0, 00:12:39.569 "current_io_qpairs": 0, 00:12:39.569 "pending_bdev_io": 0, 00:12:39.569 "completed_nvme_io": 0, 00:12:39.569 "transports": [] 00:12:39.569 }, 00:12:39.569 { 00:12:39.569 "name": "nvmf_tgt_poll_group_002", 00:12:39.569 "admin_qpairs": 0, 00:12:39.569 "io_qpairs": 0, 00:12:39.569 "current_admin_qpairs": 0, 00:12:39.569 "current_io_qpairs": 0, 00:12:39.569 "pending_bdev_io": 0, 00:12:39.569 "completed_nvme_io": 0, 00:12:39.569 "transports": [] 00:12:39.569 }, 00:12:39.569 { 00:12:39.569 "name": "nvmf_tgt_poll_group_003", 00:12:39.569 "admin_qpairs": 0, 00:12:39.569 "io_qpairs": 0, 00:12:39.569 "current_admin_qpairs": 0, 00:12:39.569 "current_io_qpairs": 0, 00:12:39.569 "pending_bdev_io": 0, 00:12:39.569 "completed_nvme_io": 0, 00:12:39.569 "transports": [] 00:12:39.569 } 00:12:39.569 ] 00:12:39.569 }' 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 
== 4 )) 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.569 [2024-07-25 10:01:24.570053] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x8d0cd0/0x8d51c0) succeed. 00:12:39.569 [2024-07-25 10:01:24.579298] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x8d2310/0x916850) succeed. 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.569 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.843 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:39.843 "tick_rate": 2100000000, 00:12:39.843 "poll_groups": [ 00:12:39.843 { 00:12:39.843 "name": "nvmf_tgt_poll_group_000", 00:12:39.843 "admin_qpairs": 0, 00:12:39.843 "io_qpairs": 0, 00:12:39.843 "current_admin_qpairs": 0, 00:12:39.843 "current_io_qpairs": 0, 00:12:39.843 "pending_bdev_io": 0, 00:12:39.843 "completed_nvme_io": 0, 00:12:39.843 "transports": [ 00:12:39.843 { 00:12:39.843 "trtype": "RDMA", 00:12:39.843 "pending_data_buffer": 0, 00:12:39.843 "devices": [ 00:12:39.843 { 00:12:39.843 "name": "mlx5_0", 00:12:39.843 "polls": 14655, 00:12:39.843 "idle_polls": 14655, 00:12:39.843 "completions": 0, 00:12:39.843 "requests": 0, 00:12:39.843 "request_latency": 0, 00:12:39.843 "pending_free_request": 0, 00:12:39.843 "pending_rdma_read": 0, 00:12:39.843 "pending_rdma_write": 0, 00:12:39.843 "pending_rdma_send": 0, 00:12:39.843 "total_send_wrs": 0, 00:12:39.843 "send_doorbell_updates": 0, 00:12:39.843 "total_recv_wrs": 4096, 00:12:39.843 "recv_doorbell_updates": 1 00:12:39.843 }, 00:12:39.843 { 00:12:39.843 "name": "mlx5_1", 00:12:39.843 "polls": 14655, 00:12:39.843 "idle_polls": 14655, 00:12:39.843 "completions": 0, 00:12:39.843 "requests": 0, 00:12:39.843 "request_latency": 0, 00:12:39.843 "pending_free_request": 0, 00:12:39.843 "pending_rdma_read": 0, 00:12:39.843 "pending_rdma_write": 0, 00:12:39.843 "pending_rdma_send": 0, 00:12:39.843 "total_send_wrs": 0, 00:12:39.843 "send_doorbell_updates": 0, 00:12:39.843 "total_recv_wrs": 4096, 00:12:39.843 "recv_doorbell_updates": 1 00:12:39.843 } 00:12:39.843 ] 00:12:39.843 } 00:12:39.843 ] 00:12:39.843 }, 00:12:39.843 { 00:12:39.843 "name": "nvmf_tgt_poll_group_001", 00:12:39.843 "admin_qpairs": 0, 00:12:39.843 "io_qpairs": 0, 00:12:39.843 "current_admin_qpairs": 0, 00:12:39.843 "current_io_qpairs": 0, 00:12:39.843 "pending_bdev_io": 0, 00:12:39.843 "completed_nvme_io": 0, 00:12:39.843 "transports": [ 00:12:39.843 { 00:12:39.843 "trtype": "RDMA", 00:12:39.843 "pending_data_buffer": 0, 00:12:39.843 "devices": [ 00:12:39.843 { 00:12:39.843 "name": "mlx5_0", 
00:12:39.843 "polls": 9734, 00:12:39.843 "idle_polls": 9734, 00:12:39.843 "completions": 0, 00:12:39.843 "requests": 0, 00:12:39.843 "request_latency": 0, 00:12:39.843 "pending_free_request": 0, 00:12:39.843 "pending_rdma_read": 0, 00:12:39.843 "pending_rdma_write": 0, 00:12:39.843 "pending_rdma_send": 0, 00:12:39.843 "total_send_wrs": 0, 00:12:39.843 "send_doorbell_updates": 0, 00:12:39.843 "total_recv_wrs": 4096, 00:12:39.843 "recv_doorbell_updates": 1 00:12:39.843 }, 00:12:39.843 { 00:12:39.843 "name": "mlx5_1", 00:12:39.843 "polls": 9734, 00:12:39.843 "idle_polls": 9734, 00:12:39.843 "completions": 0, 00:12:39.843 "requests": 0, 00:12:39.843 "request_latency": 0, 00:12:39.843 "pending_free_request": 0, 00:12:39.843 "pending_rdma_read": 0, 00:12:39.843 "pending_rdma_write": 0, 00:12:39.843 "pending_rdma_send": 0, 00:12:39.843 "total_send_wrs": 0, 00:12:39.843 "send_doorbell_updates": 0, 00:12:39.843 "total_recv_wrs": 4096, 00:12:39.843 "recv_doorbell_updates": 1 00:12:39.843 } 00:12:39.843 ] 00:12:39.843 } 00:12:39.843 ] 00:12:39.843 }, 00:12:39.843 { 00:12:39.843 "name": "nvmf_tgt_poll_group_002", 00:12:39.843 "admin_qpairs": 0, 00:12:39.843 "io_qpairs": 0, 00:12:39.843 "current_admin_qpairs": 0, 00:12:39.843 "current_io_qpairs": 0, 00:12:39.843 "pending_bdev_io": 0, 00:12:39.843 "completed_nvme_io": 0, 00:12:39.843 "transports": [ 00:12:39.843 { 00:12:39.843 "trtype": "RDMA", 00:12:39.843 "pending_data_buffer": 0, 00:12:39.843 "devices": [ 00:12:39.843 { 00:12:39.843 "name": "mlx5_0", 00:12:39.843 "polls": 5116, 00:12:39.843 "idle_polls": 5116, 00:12:39.843 "completions": 0, 00:12:39.843 "requests": 0, 00:12:39.843 "request_latency": 0, 00:12:39.843 "pending_free_request": 0, 00:12:39.843 "pending_rdma_read": 0, 00:12:39.843 "pending_rdma_write": 0, 00:12:39.843 "pending_rdma_send": 0, 00:12:39.843 "total_send_wrs": 0, 00:12:39.843 "send_doorbell_updates": 0, 00:12:39.843 "total_recv_wrs": 4096, 00:12:39.843 "recv_doorbell_updates": 1 00:12:39.843 }, 00:12:39.843 { 00:12:39.843 "name": "mlx5_1", 00:12:39.843 "polls": 5116, 00:12:39.843 "idle_polls": 5116, 00:12:39.843 "completions": 0, 00:12:39.843 "requests": 0, 00:12:39.843 "request_latency": 0, 00:12:39.843 "pending_free_request": 0, 00:12:39.843 "pending_rdma_read": 0, 00:12:39.843 "pending_rdma_write": 0, 00:12:39.843 "pending_rdma_send": 0, 00:12:39.843 "total_send_wrs": 0, 00:12:39.843 "send_doorbell_updates": 0, 00:12:39.843 "total_recv_wrs": 4096, 00:12:39.843 "recv_doorbell_updates": 1 00:12:39.843 } 00:12:39.843 ] 00:12:39.843 } 00:12:39.843 ] 00:12:39.843 }, 00:12:39.843 { 00:12:39.843 "name": "nvmf_tgt_poll_group_003", 00:12:39.843 "admin_qpairs": 0, 00:12:39.843 "io_qpairs": 0, 00:12:39.843 "current_admin_qpairs": 0, 00:12:39.843 "current_io_qpairs": 0, 00:12:39.843 "pending_bdev_io": 0, 00:12:39.843 "completed_nvme_io": 0, 00:12:39.843 "transports": [ 00:12:39.843 { 00:12:39.843 "trtype": "RDMA", 00:12:39.843 "pending_data_buffer": 0, 00:12:39.843 "devices": [ 00:12:39.843 { 00:12:39.843 "name": "mlx5_0", 00:12:39.843 "polls": 879, 00:12:39.843 "idle_polls": 879, 00:12:39.843 "completions": 0, 00:12:39.843 "requests": 0, 00:12:39.843 "request_latency": 0, 00:12:39.843 "pending_free_request": 0, 00:12:39.843 "pending_rdma_read": 0, 00:12:39.843 "pending_rdma_write": 0, 00:12:39.843 "pending_rdma_send": 0, 00:12:39.844 "total_send_wrs": 0, 00:12:39.844 "send_doorbell_updates": 0, 00:12:39.844 "total_recv_wrs": 4096, 00:12:39.844 "recv_doorbell_updates": 1 00:12:39.844 }, 00:12:39.844 { 00:12:39.844 "name": "mlx5_1", 
00:12:39.844 "polls": 879, 00:12:39.844 "idle_polls": 879, 00:12:39.844 "completions": 0, 00:12:39.844 "requests": 0, 00:12:39.844 "request_latency": 0, 00:12:39.844 "pending_free_request": 0, 00:12:39.844 "pending_rdma_read": 0, 00:12:39.844 "pending_rdma_write": 0, 00:12:39.844 "pending_rdma_send": 0, 00:12:39.844 "total_send_wrs": 0, 00:12:39.844 "send_doorbell_updates": 0, 00:12:39.844 "total_recv_wrs": 4096, 00:12:39.844 "recv_doorbell_updates": 1 00:12:39.844 } 00:12:39.844 ] 00:12:39.844 } 00:12:39.844 ] 00:12:39.844 } 00:12:39.844 ] 00:12:39.844 }' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@41 -- # transport_type=RDMA 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:39.844 10:01:24 
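
The jcount and jsum checks above can be read straight out of the traced target/rpc.sh helper lines: jcount pipes a jq filter through wc -l, jsum sums the filter's output with awk. Reconstructed, slightly simplified, as a standalone sketch:

  # The two helpers behind the checks above, reconstructed from the traced
  # target/rpc.sh lines: jcount counts the values a jq filter yields from
  # the captured stats JSON, jsum adds them up.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  stats=$($rpc nvmf_get_stats)
  jcount() { jq "$1" <<< "$stats" | wc -l; }
  jsum()   { jq "$1" <<< "$stats" | awk '{s+=$1} END {print s}'; }
  (( $(jcount '.poll_groups[].name') == 4 ))     # one poll group per core in -m 0xF
  (( $(jsum '.poll_groups[].io_qpairs') == 0 ))  # no qpairs before any host connects
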
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.844 Malloc1 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:39.844 [2024-07-25 10:01:24.971943] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:39.844 10:01:24 
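
The sequence just traced provisions the subsystem deny-by-default: a 64 MiB malloc bdev becomes a namespace of cnode1, allow_any_host is switched off with -d, and an RDMA listener is opened. That is why the nvme connect that follows is wrapped in the harness's NOT helper: the test passes only if the unlisted host's connect fails. Written as direct rpc.py calls (rpc_cmd in the trace is the harness's wrapper over the same socket):

  # Deny-by-default provisioning, as traced at target/rpc.sh@49-55.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  # -d disables "allow any host": from here on, only explicitly added host
  # NQNs may connect, which is what the NOT-wrapped connect verifies.
  $rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
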
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:39.844 10:01:24 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -s 4420 00:12:40.117 [2024-07-25 10:01:25.013775] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:40.117 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:40.117 could not add new controller: failed to write to nvme-fabrics device 00:12:40.117 10:01:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:40.117 10:01:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:40.117 10:01:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:40.117 10:01:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:40.117 10:01:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:40.117 10:01:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.117 10:01:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.117 10:01:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.117 10:01:25 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:41.050 10:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.050 10:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:41.050 10:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.050 10:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:41.050 10:01:26 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:42.948 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:42.948 10:01:28 
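
waitforserial, whose body is visible in the common/autotest_common.sh@1198-1208 trace lines around here, is a simple poll: sleep, count the lsblk rows carrying the subsystem serial, and return once the count matches the expected number of controllers. Reconstructed as a sketch, with the defaults seen in the log:

  # Poll until a block device with the given serial shows up (sketch of
  # the traced helper; 15 retries of 2 s each, expecting one controller).
  waitforserial() {
      local serial=$1 nvme_device_counter=${2:-1} nvme_devices=0 i=0
      while (( i++ <= 15 )); do
          sleep 2
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices == nvme_device_counter )) && return 0
      done
      return 1
  }
  waitforserial SPDKISFASTANDAWESOME   # serial assigned at nvmf_create_subsystem time
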
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:42.948 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.948 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:42.948 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.948 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:42.948 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:43.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.882 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:43.882 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:43.882 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:43.882 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.882 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:43.882 10:01:28 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:43.882 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:44.140 [2024-07-25 10:01:29.045563] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:44.140 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:44.140 could not add new controller: failed to write to nvme-fabrics device 00:12:44.140 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:44.140 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:44.140 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:44.140 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:44.140 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:44.140 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.140 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.140 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.140 10:01:29 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:45.074 10:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:45.074 10:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:45.074 10:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.074 10:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:45.074 10:01:30 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:46.973 10:01:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:46.973 10:01:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:46.973 10:01:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.973 10:01:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:46.973 10:01:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.973 10:01:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:46.973 10:01:32 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.907 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.165 [2024-07-25 10:01:33.081835] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.165 10:01:33 
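
With the host-ACL checks done (connect fails while the host is unlisted, succeeds after nvmf_subsystem_add_host, fails again after remove_host, and succeeds once any-host access is re-enabled with -e), the script enters its first loop: five rounds of rebuilding the subsystem and taking a real controller through connect and disconnect. Each iteration traced below amounts to:

  # Shape of the first loop (target/rpc.sh@81-94 as traced).
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # fixed NSID 5
      $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1       # bare call enables any-host access, per the traced behavior
      # nvme connect, waitforserial, nvme disconnect, waitforserial_disconnect
      # happen here, exactly as in the trace
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
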
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.165 10:01:33 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:49.098 10:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.098 10:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:49.098 10:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.098 10:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:49.098 10:01:34 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:51.000 10:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:51.000 10:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:51.000 10:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.000 10:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:51.000 10:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.000 10:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:51.000 10:01:36 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.935 [2024-07-25 10:01:37.074928] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.935 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.194 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.194 10:01:37 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:53.128 10:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:53.128 10:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:53.128 10:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:53.128 10:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:53.128 10:01:38 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:55.029 10:01:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:55.029 10:01:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:55.029 
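
The connect command repeated in every iteration is worth unpacking once. The host identifies itself with an NQN derived from the machine's UUID, the same identity the earlier allow-list checks keyed on. With the values copied from the trace, and flag meanings per nvme-cli (-i is nvme-cli's I/O queue count for fabrics connects):

  # The nvme-cli connect call used throughout (values from the trace):
  #   -t rdma              fabric transport
  #   -n <nqn>             target subsystem NQN
  #   -a / -s              listener address / service id (port 4420)
  #   -i 15                number of I/O queues to request
  #   --hostnqn/--hostid   host identity checked against the subsystem allow list
  host_uuid=00ad29c2-ccbd-e911-906e-0017a4403562
  nvme connect -i 15 \
      --hostnqn="nqn.2014-08.org.nvmexpress:uuid:${host_uuid}" \
      --hostid="$host_uuid" \
      -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
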
10:01:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:55.029 10:01:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:55.029 10:01:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:55.029 10:01:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:55.029 10:01:40 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:55.963 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.964 [2024-07-25 10:01:41.071040] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA 
Target Listening on 192.168.100.8 port 4420 *** 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.964 10:01:41 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:12:56.897 10:01:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.897 10:01:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.897 10:01:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.897 10:01:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:56.897 10:01:42 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:59.479 10:01:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:59.479 10:01:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:59.479 10:01:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:59.480 10:01:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:59.480 10:01:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:59.480 10:01:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:59.480 10:01:44 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:00.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:00.046 10:01:45 
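
waitforserial_disconnect, whose body appears in the common/autotest_common.sh@1219-1231 trace lines, is the mirror image of waitforserial: after nvme disconnect it polls lsblk until no row carries the serial any more. As a sketch:

  # Poll until the serial has vanished from lsblk (sketch of the traced
  # @1219-1231 helper; same 15-retry budget as waitforserial).
  waitforserial_disconnect() {
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
          sleep 2
      done
      return 1
  }
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  waitforserial_disconnect SPDKISFASTANDAWESOME
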
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.046 [2024-07-25 10:01:45.064805] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.046 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.047 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.047 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.047 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.047 10:01:45 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:00.983 10:01:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:00.983 10:01:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:00.983 10:01:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.983 10:01:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:00.983 10:01:46 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:03.514 10:01:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:03.514 10:01:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:03.514 10:01:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.514 10:01:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:03.514 10:01:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.514 10:01:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:03.514 10:01:48 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:04.083 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:04.083 10:01:49 
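
Every rpc_cmd prefix in these traces is the harness's RPC dispatcher. Its real implementation in common/autotest_common.sh keeps a persistent connection to the app and retries, but functionally each call reduces to a single scripts/rpc.py invocation against the socket the target was started with:

  # Functional equivalent of the traced rpc_cmd (a sketch, not the
  # harness's actual implementation).
  rpc_cmd() {
      local rpc_addr=/var/tmp/spdk.sock   # as set in the startup trace
      /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s "$rpc_addr" "$@"
  }
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
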
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.083 [2024-07-25 10:01:49.061834] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.083 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:04.084 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:04.084 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:04.084 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.084 10:01:49 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:13:05.018 10:01:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:05.018 10:01:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:05.018 10:01:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:05.018 10:01:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:05.018 10:01:50 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.918 10:01:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.918 10:01:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.918 10:01:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.918 10:01:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.918 10:01:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == 
nvme_device_counter )) 00:13:06.918 10:01:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:06.918 10:01:52 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:07.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.852 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:07.852 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.111 [2024-07-25 10:01:53.076809] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.111 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 [2024-07-25 10:01:53.124951] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 
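
The loop the trace entered at target/rpc.sh@99 repeats the build-up and tear-down five more times but never connects a host: it is pure RPC churn over the subsystem create/delete path. Note that add_ns is issued without -n this time, so the namespace takes the first free NSID and the matching remove_ns uses 1:

  # Shape of the second loop (target/rpc.sh@99-107 as traced): create/delete
  # churn with no host I/O.
  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  for i in $(seq 1 5); do
      $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # auto NSID -> 1
      $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done
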
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 [2024-07-25 10:01:53.177163] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 [2024-07-25 10:01:53.225369] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
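
The five passes traced here come from the create/delete loop at target/rpc.sh lines 99-107: each iteration creates nqn.2016-06.io.spdk:cnode1, adds the RDMA listener on 192.168.100.8:4420, attaches the Malloc1 namespace, opens the subsystem to any host, then removes the namespace and deletes the subsystem again. A minimal standalone sketch of that sequence, assuming scripts/rpc.py is on PATH, the target is already running, and a Malloc1 bdev exists (the RPC names and arguments are taken from the trace; the loop body is a reconstruction, not the script verbatim):

loops=5
for i in $(seq 1 $loops); do
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # namespace IDs start at 1
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done
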
00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:08.112 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.375 [2024-07-25 10:01:53.273545] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.375 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.376 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:08.376 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.376 10:01:53 
nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.376 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.376 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:08.376 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.376 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.376 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.376 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:08.376 "tick_rate": 2100000000, 00:13:08.376 "poll_groups": [ 00:13:08.376 { 00:13:08.376 "name": "nvmf_tgt_poll_group_000", 00:13:08.376 "admin_qpairs": 2, 00:13:08.376 "io_qpairs": 27, 00:13:08.376 "current_admin_qpairs": 0, 00:13:08.376 "current_io_qpairs": 0, 00:13:08.376 "pending_bdev_io": 0, 00:13:08.376 "completed_nvme_io": 113, 00:13:08.376 "transports": [ 00:13:08.376 { 00:13:08.376 "trtype": "RDMA", 00:13:08.376 "pending_data_buffer": 0, 00:13:08.376 "devices": [ 00:13:08.376 { 00:13:08.376 "name": "mlx5_0", 00:13:08.376 "polls": 3379084, 00:13:08.376 "idle_polls": 3378790, 00:13:08.376 "completions": 335, 00:13:08.376 "requests": 167, 00:13:08.376 "request_latency": 30509890, 00:13:08.376 "pending_free_request": 0, 00:13:08.376 "pending_rdma_read": 0, 00:13:08.376 "pending_rdma_write": 0, 00:13:08.376 "pending_rdma_send": 0, 00:13:08.376 "total_send_wrs": 279, 00:13:08.376 "send_doorbell_updates": 143, 00:13:08.376 "total_recv_wrs": 4263, 00:13:08.376 "recv_doorbell_updates": 143 00:13:08.376 }, 00:13:08.376 { 00:13:08.376 "name": "mlx5_1", 00:13:08.376 "polls": 3379084, 00:13:08.376 "idle_polls": 3379084, 00:13:08.376 "completions": 0, 00:13:08.376 "requests": 0, 00:13:08.376 "request_latency": 0, 00:13:08.376 "pending_free_request": 0, 00:13:08.376 "pending_rdma_read": 0, 00:13:08.376 "pending_rdma_write": 0, 00:13:08.376 "pending_rdma_send": 0, 00:13:08.376 "total_send_wrs": 0, 00:13:08.376 "send_doorbell_updates": 0, 00:13:08.376 "total_recv_wrs": 4096, 00:13:08.376 "recv_doorbell_updates": 1 00:13:08.376 } 00:13:08.376 ] 00:13:08.376 } 00:13:08.376 ] 00:13:08.376 }, 00:13:08.376 { 00:13:08.376 "name": "nvmf_tgt_poll_group_001", 00:13:08.376 "admin_qpairs": 2, 00:13:08.376 "io_qpairs": 26, 00:13:08.376 "current_admin_qpairs": 0, 00:13:08.376 "current_io_qpairs": 0, 00:13:08.376 "pending_bdev_io": 0, 00:13:08.376 "completed_nvme_io": 77, 00:13:08.376 "transports": [ 00:13:08.376 { 00:13:08.376 "trtype": "RDMA", 00:13:08.376 "pending_data_buffer": 0, 00:13:08.376 "devices": [ 00:13:08.376 { 00:13:08.376 "name": "mlx5_0", 00:13:08.376 "polls": 3489573, 00:13:08.376 "idle_polls": 3489333, 00:13:08.376 "completions": 262, 00:13:08.376 "requests": 131, 00:13:08.376 "request_latency": 20604394, 00:13:08.376 "pending_free_request": 0, 00:13:08.376 "pending_rdma_read": 0, 00:13:08.376 "pending_rdma_write": 0, 00:13:08.376 "pending_rdma_send": 0, 00:13:08.376 "total_send_wrs": 208, 00:13:08.376 "send_doorbell_updates": 118, 00:13:08.376 "total_recv_wrs": 4227, 00:13:08.376 "recv_doorbell_updates": 119 00:13:08.376 }, 00:13:08.376 { 00:13:08.376 "name": "mlx5_1", 00:13:08.376 "polls": 3489573, 00:13:08.376 "idle_polls": 3489573, 00:13:08.376 "completions": 0, 00:13:08.376 "requests": 0, 00:13:08.376 "request_latency": 0, 00:13:08.376 "pending_free_request": 0, 00:13:08.376 
"pending_rdma_read": 0, 00:13:08.376 "pending_rdma_write": 0, 00:13:08.376 "pending_rdma_send": 0, 00:13:08.376 "total_send_wrs": 0, 00:13:08.376 "send_doorbell_updates": 0, 00:13:08.376 "total_recv_wrs": 4096, 00:13:08.376 "recv_doorbell_updates": 1 00:13:08.376 } 00:13:08.376 ] 00:13:08.376 } 00:13:08.376 ] 00:13:08.376 }, 00:13:08.376 { 00:13:08.376 "name": "nvmf_tgt_poll_group_002", 00:13:08.376 "admin_qpairs": 1, 00:13:08.376 "io_qpairs": 26, 00:13:08.376 "current_admin_qpairs": 0, 00:13:08.376 "current_io_qpairs": 0, 00:13:08.376 "pending_bdev_io": 0, 00:13:08.376 "completed_nvme_io": 96, 00:13:08.376 "transports": [ 00:13:08.376 { 00:13:08.376 "trtype": "RDMA", 00:13:08.376 "pending_data_buffer": 0, 00:13:08.376 "devices": [ 00:13:08.376 { 00:13:08.376 "name": "mlx5_0", 00:13:08.376 "polls": 3407388, 00:13:08.376 "idle_polls": 3407161, 00:13:08.376 "completions": 249, 00:13:08.376 "requests": 124, 00:13:08.376 "request_latency": 19137554, 00:13:08.376 "pending_free_request": 0, 00:13:08.376 "pending_rdma_read": 0, 00:13:08.376 "pending_rdma_write": 0, 00:13:08.376 "pending_rdma_send": 0, 00:13:08.376 "total_send_wrs": 208, 00:13:08.376 "send_doorbell_updates": 112, 00:13:08.376 "total_recv_wrs": 4220, 00:13:08.376 "recv_doorbell_updates": 112 00:13:08.376 }, 00:13:08.376 { 00:13:08.376 "name": "mlx5_1", 00:13:08.376 "polls": 3407388, 00:13:08.376 "idle_polls": 3407388, 00:13:08.376 "completions": 0, 00:13:08.376 "requests": 0, 00:13:08.376 "request_latency": 0, 00:13:08.376 "pending_free_request": 0, 00:13:08.376 "pending_rdma_read": 0, 00:13:08.376 "pending_rdma_write": 0, 00:13:08.376 "pending_rdma_send": 0, 00:13:08.376 "total_send_wrs": 0, 00:13:08.376 "send_doorbell_updates": 0, 00:13:08.376 "total_recv_wrs": 4096, 00:13:08.376 "recv_doorbell_updates": 1 00:13:08.376 } 00:13:08.376 ] 00:13:08.376 } 00:13:08.376 ] 00:13:08.376 }, 00:13:08.376 { 00:13:08.377 "name": "nvmf_tgt_poll_group_003", 00:13:08.377 "admin_qpairs": 2, 00:13:08.377 "io_qpairs": 26, 00:13:08.377 "current_admin_qpairs": 0, 00:13:08.377 "current_io_qpairs": 0, 00:13:08.377 "pending_bdev_io": 0, 00:13:08.377 "completed_nvme_io": 169, 00:13:08.377 "transports": [ 00:13:08.377 { 00:13:08.377 "trtype": "RDMA", 00:13:08.377 "pending_data_buffer": 0, 00:13:08.377 "devices": [ 00:13:08.377 { 00:13:08.377 "name": "mlx5_0", 00:13:08.377 "polls": 2705600, 00:13:08.377 "idle_polls": 2705218, 00:13:08.377 "completions": 444, 00:13:08.377 "requests": 222, 00:13:08.377 "request_latency": 44926752, 00:13:08.377 "pending_free_request": 0, 00:13:08.377 "pending_rdma_read": 0, 00:13:08.377 "pending_rdma_write": 0, 00:13:08.377 "pending_rdma_send": 0, 00:13:08.377 "total_send_wrs": 390, 00:13:08.377 "send_doorbell_updates": 185, 00:13:08.377 "total_recv_wrs": 4318, 00:13:08.377 "recv_doorbell_updates": 186 00:13:08.377 }, 00:13:08.377 { 00:13:08.377 "name": "mlx5_1", 00:13:08.377 "polls": 2705600, 00:13:08.377 "idle_polls": 2705600, 00:13:08.377 "completions": 0, 00:13:08.377 "requests": 0, 00:13:08.377 "request_latency": 0, 00:13:08.377 "pending_free_request": 0, 00:13:08.377 "pending_rdma_read": 0, 00:13:08.377 "pending_rdma_write": 0, 00:13:08.377 "pending_rdma_send": 0, 00:13:08.377 "total_send_wrs": 0, 00:13:08.377 "send_doorbell_updates": 0, 00:13:08.377 "total_recv_wrs": 4096, 00:13:08.377 "recv_doorbell_updates": 1 00:13:08.377 } 00:13:08.377 ] 00:13:08.377 } 00:13:08.377 ] 00:13:08.377 } 00:13:08.377 ] 00:13:08.377 }' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum 
'.poll_groups[].admin_qpairs' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@117 -- # (( 1290 > 0 )) 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@118 -- # (( 115178590 > 0 )) 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:08.377 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:08.635 rmmod nvme_rdma 00:13:08.635 rmmod nvme_fabrics 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:08.635 
10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2503319 ']' 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2503319 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2503319 ']' 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2503319 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2503319 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:08.635 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:08.636 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2503319' 00:13:08.636 killing process with pid 2503319 00:13:08.636 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2503319 00:13:08.636 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 2503319 00:13:08.895 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:08.895 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:08.895 00:13:08.895 real 0m36.040s 00:13:08.895 user 2m2.064s 00:13:08.895 sys 0m5.603s 00:13:08.895 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:08.895 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:08.895 ************************************ 00:13:08.895 END TEST nvmf_rpc 00:13:08.895 ************************************ 00:13:08.895 10:01:53 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:13:08.895 10:01:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:08.895 10:01:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:08.895 10:01:53 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:08.895 ************************************ 00:13:08.895 START TEST nvmf_invalid 00:13:08.895 ************************************ 00:13:08.895 10:01:53 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:13:09.155 * Looking for test storage... 
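
The jsum checks that closed out the nvmf_rpc test above (target/rpc.sh@19-20) reduce the captured nvmf_get_stats JSON to a single number per metric: jq streams one value per poll group (or per RDMA device) and awk sums the stream. A sketch of the helper, assuming it filters the JSON held in $stats, which was captured earlier as the output of rpc_cmd nvmf_get_stats:

jsum() {
    local filter=$1
    # emit one value per matching JSON node, then sum them
    echo "$stats" | jq "$filter" | awk '{s+=$1}END{print s}'
}

The four asserted sums match the stats dump: 2+2+1+2 = 7 admin qpairs, 27+26+26+26 = 105 I/O qpairs, 335+262+249+444 = 1290 mlx5_0 completions, and 30509890+20604394+19137554+44926752 = 115178590 total request latency.
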
00:13:09.155 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.155 10:01:54 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:09.155 10:01:54 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:14.430 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:14.431 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:14.431 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:14.431 Found net devices under 0000:da:00.0: mlx_0_0 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:14.431 Found net devices under 0000:da:00.1: mlx_0_1 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@420 -- # rdma_device_init 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # uname 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:14.431 10:01:59 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:14.431 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:14.690 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:14.690 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:14.690 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:14.690 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:14.690 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:14.691 
10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:14.691 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:14.691 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:13:14.691 altname enp218s0f0np0 00:13:14.691 altname ens818f0np0 00:13:14.691 inet 192.168.100.8/24 scope global mlx_0_0 00:13:14.691 valid_lft forever preferred_lft forever 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:14.691 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:14.691 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:13:14.691 altname enp218s0f1np1 00:13:14.691 altname ens818f1np1 00:13:14.691 inet 192.168.100.9/24 scope global mlx_0_1 00:13:14.691 valid_lft forever preferred_lft forever 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in 
"${net_devs[@]}" 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # continue 2 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:14.691 192.168.100.9' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:14.691 192.168.100.9' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # head -n 1 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:14.691 192.168.100.9' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # tail -n +2 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # head -n 1 00:13:14.691 10:01:59 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2511625 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2511625 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2511625 ']' 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.691 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.692 10:01:59 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:14.692 [2024-07-25 10:01:59.822117] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:14.692 [2024-07-25 10:01:59.822180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.692 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.950 [2024-07-25 10:01:59.889932] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:14.951 [2024-07-25 10:01:59.968968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.951 [2024-07-25 10:01:59.969003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.951 [2024-07-25 10:01:59.969010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.951 [2024-07-25 10:01:59.969019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
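
In the nvmftestinit trace above, allocate_nic_ips resolves each interface returned by get_rdma_if_list to its IPv4 address, which is how NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP end up as 192.168.100.8 and 192.168.100.9. The resolution is the three-stage pipeline traced at nvmf/common.sh@113; as a standalone sketch:

get_ip_address() {
    local interface=$1
    # "ip -o" emits one record per line; field 4 is ADDR/PREFIX, cut strips the prefix
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# get_ip_address mlx_0_0  ->  192.168.100.8
# get_ip_address mlx_0_1  ->  192.168.100.9
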
00:13:14.951 [2024-07-25 10:01:59.969040] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:14.951 [2024-07-25 10:01:59.969086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.951 [2024-07-25 10:01:59.969116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.951 [2024-07-25 10:01:59.969235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.951 [2024-07-25 10:01:59.969235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.517 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:15.517 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:13:15.517 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:15.517 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.517 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.776 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.776 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:15.776 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode19350 00:13:15.776 [2024-07-25 10:02:00.836958] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:15.776 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:15.776 { 00:13:15.776 "nqn": "nqn.2016-06.io.spdk:cnode19350", 00:13:15.776 "tgt_name": "foobar", 00:13:15.776 "method": "nvmf_create_subsystem", 00:13:15.776 "req_id": 1 00:13:15.776 } 00:13:15.776 Got JSON-RPC error response 00:13:15.776 response: 00:13:15.776 { 00:13:15.776 "code": -32603, 00:13:15.776 "message": "Unable to find target foobar" 00:13:15.776 }' 00:13:15.776 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:15.776 { 00:13:15.776 "nqn": "nqn.2016-06.io.spdk:cnode19350", 00:13:15.776 "tgt_name": "foobar", 00:13:15.776 "method": "nvmf_create_subsystem", 00:13:15.776 "req_id": 1 00:13:15.776 } 00:13:15.776 Got JSON-RPC error response 00:13:15.776 response: 00:13:15.776 { 00:13:15.776 "code": -32603, 00:13:15.776 "message": "Unable to find target foobar" 00:13:15.776 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:15.776 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:15.776 10:02:00 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode25794 00:13:16.033 [2024-07-25 10:02:01.025625] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25794: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:16.033 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:16.033 { 00:13:16.033 "nqn": "nqn.2016-06.io.spdk:cnode25794", 00:13:16.033 "serial_number": 
"SPDKISFASTANDAWESOME\u001f", 00:13:16.033 "method": "nvmf_create_subsystem", 00:13:16.033 "req_id": 1 00:13:16.033 } 00:13:16.033 Got JSON-RPC error response 00:13:16.033 response: 00:13:16.033 { 00:13:16.033 "code": -32602, 00:13:16.033 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:16.033 }' 00:13:16.033 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:16.033 { 00:13:16.033 "nqn": "nqn.2016-06.io.spdk:cnode25794", 00:13:16.033 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:16.033 "method": "nvmf_create_subsystem", 00:13:16.033 "req_id": 1 00:13:16.033 } 00:13:16.033 Got JSON-RPC error response 00:13:16.033 response: 00:13:16.033 { 00:13:16.033 "code": -32602, 00:13:16.033 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:16.033 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:16.033 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:16.033 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode22389 00:13:16.291 [2024-07-25 10:02:01.206177] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22389: invalid model number 'SPDK_Controller' 00:13:16.291 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:16.291 { 00:13:16.291 "nqn": "nqn.2016-06.io.spdk:cnode22389", 00:13:16.291 "model_number": "SPDK_Controller\u001f", 00:13:16.291 "method": "nvmf_create_subsystem", 00:13:16.291 "req_id": 1 00:13:16.291 } 00:13:16.291 Got JSON-RPC error response 00:13:16.291 response: 00:13:16.291 { 00:13:16.291 "code": -32602, 00:13:16.291 "message": "Invalid MN SPDK_Controller\u001f" 00:13:16.291 }' 00:13:16.291 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:16.291 { 00:13:16.291 "nqn": "nqn.2016-06.io.spdk:cnode22389", 00:13:16.291 "model_number": "SPDK_Controller\u001f", 00:13:16.291 "method": "nvmf_create_subsystem", 00:13:16.291 "req_id": 1 00:13:16.291 } 00:13:16.291 Got JSON-RPC error response 00:13:16.291 response: 00:13:16.291 { 00:13:16.291 "code": -32602, 00:13:16.291 "message": "Invalid MN SPDK_Controller\u001f" 00:13:16.291 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:16.291 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:16.291 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:13:16.291 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:16.292 10:02:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # string+=W 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:16.292 10:02:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < 
length )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.292 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.293 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ G == \- ]] 00:13:16.293 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'GOgK\"W'\''9kJdZ.vaM4[LL' 00:13:16.293 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'GOgK\"W'\''9kJdZ.vaM4[LL' nqn.2016-06.io.spdk:cnode28212 00:13:16.551 [2024-07-25 10:02:01.519217] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28212: invalid serial number 'GOgK\"W'9kJdZ.vaM4[LL' 00:13:16.551 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:16.551 { 00:13:16.551 "nqn": "nqn.2016-06.io.spdk:cnode28212", 00:13:16.551 "serial_number": "GOgK\\\"W'\''9kJdZ.vaM4[LL", 00:13:16.551 "method": "nvmf_create_subsystem", 00:13:16.551 "req_id": 1 00:13:16.551 } 00:13:16.551 Got JSON-RPC error response 00:13:16.551 response: 00:13:16.551 { 00:13:16.551 "code": -32602, 00:13:16.551 "message": "Invalid SN GOgK\\\"W'\''9kJdZ.vaM4[LL" 00:13:16.551 }' 00:13:16.551 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:16.552 { 00:13:16.552 "nqn": "nqn.2016-06.io.spdk:cnode28212", 00:13:16.552 "serial_number": "GOgK\\\"W'9kJdZ.vaM4[LL", 00:13:16.552 "method": "nvmf_create_subsystem", 00:13:16.552 "req_id": 1 00:13:16.552 } 00:13:16.552 Got JSON-RPC error response 00:13:16.552 response: 00:13:16.552 { 00:13:16.552 "code": -32602, 00:13:16.552 "message": "Invalid SN GOgK\\\"W'9kJdZ.vaM4[LL" 00:13:16.552 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 117 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=H 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:16.552 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:16.553 10:02:01 
nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:16.553 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:16.811 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
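The long run of printf %x / echo -e / string+= steps here is gen_random_s building a 41-character model number: each iteration picks a random entry from the chars table (ASCII codes 32 through 127), converts the decimal code to hex, decodes it with echo -e, and appends the resulting character. A condensed sketch of the same idea (our own restatement of the loop, not the script verbatim):

    gen_random_s() {
        local length=$1 ll code string=
        for ((ll = 0; ll < length; ll++)); do
            code=$((RANDOM % 96 + 32))                   # ASCII 32..127, as in the chars table
            string+=$(echo -e "\\x$(printf %x "$code")") # decimal -> hex -> character
        done
        echo "$string"
    }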
00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x7a' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ubz9^H,2B5O0xHS)[0Chkm>GHH!d5fle!F]+KJ\zi' 00:13:16.812 10:02:01 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'ubz9^H,2B5O0xHS)[0Chkm>GHH!d5fle!F]+KJ\zi' nqn.2016-06.io.spdk:cnode7376 00:13:17.070 [2024-07-25 10:02:01.980762] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7376: invalid model number 'ubz9^H,2B5O0xHS)[0Chkm>GHH!d5fle!F]+KJ\zi' 00:13:17.071 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:17.071 { 00:13:17.071 "nqn": "nqn.2016-06.io.spdk:cnode7376", 00:13:17.071 "model_number": "ubz9^H,2B5O0xHS)[0Chkm>GHH!d5fle!F]+KJ\\zi", 00:13:17.071 "method": "nvmf_create_subsystem", 00:13:17.071 "req_id": 1 00:13:17.071 } 00:13:17.071 Got JSON-RPC error response 00:13:17.071 response: 00:13:17.071 { 00:13:17.071 "code": -32602, 00:13:17.071 "message": "Invalid MN ubz9^H,2B5O0xHS)[0Chkm>GHH!d5fle!F]+KJ\\zi" 00:13:17.071 }' 00:13:17.071 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:17.071 { 00:13:17.071 "nqn": "nqn.2016-06.io.spdk:cnode7376", 00:13:17.071 "model_number": "ubz9^H,2B5O0xHS)[0Chkm>GHH!d5fle!F]+KJ\\zi", 00:13:17.071 "method": "nvmf_create_subsystem", 00:13:17.071 "req_id": 1 00:13:17.071 } 00:13:17.071 Got JSON-RPC error response 00:13:17.071 response: 00:13:17.071 { 00:13:17.071 "code": -32602, 00:13:17.071 "message": "Invalid MN ubz9^H,2B5O0xHS)[0Chkm>GHH!d5fle!F]+KJ\\zi" 00:13:17.071 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:17.071 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:13:17.071 [2024-07-25 10:02:02.182739] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1edd560/0x1ee1a50) succeed. 00:13:17.071 [2024-07-25 10:02:02.192813] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1edeba0/0x1f230e0) succeed. 
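With the RDMA transport created on both mlx5 devices, the remaining checks all follow one negative-test pattern: issue an rpc.py call that must fail, then assert on the JSON-RPC error text. A minimal sketch of that pattern, reusing the NQN and flag from the cntlid test below (the expect_rpc_error helper name is ours):

    expect_rpc_error() {
        local pattern=$1; shift
        local out
        out=$("$@" 2>&1) && return 1   # the call is required to fail
        [[ $out == *"$pattern"* ]]     # ...and to fail with the expected message
    }
    # min_cntlid of 0 is out of range, so this must be rejected:
    expect_rpc_error 'Invalid cntlid range [0-65519]' \
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13774 -i 0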
00:13:17.337 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:17.634 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:13:17.634 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '192.168.100.8 00:13:17.634 192.168.100.9' 00:13:17.634 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:17.634 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=192.168.100.8 00:13:17.634 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:13:17.634 [2024-07-25 10:02:02.680382] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:17.634 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:17.634 { 00:13:17.634 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:17.634 "listen_address": { 00:13:17.634 "trtype": "rdma", 00:13:17.634 "traddr": "192.168.100.8", 00:13:17.634 "trsvcid": "4421" 00:13:17.634 }, 00:13:17.634 "method": "nvmf_subsystem_remove_listener", 00:13:17.634 "req_id": 1 00:13:17.634 } 00:13:17.634 Got JSON-RPC error response 00:13:17.634 response: 00:13:17.634 { 00:13:17.634 "code": -32602, 00:13:17.634 "message": "Invalid parameters" 00:13:17.634 }' 00:13:17.634 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:17.634 { 00:13:17.634 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:17.634 "listen_address": { 00:13:17.634 "trtype": "rdma", 00:13:17.634 "traddr": "192.168.100.8", 00:13:17.634 "trsvcid": "4421" 00:13:17.634 }, 00:13:17.634 "method": "nvmf_subsystem_remove_listener", 00:13:17.634 "req_id": 1 00:13:17.634 } 00:13:17.634 Got JSON-RPC error response 00:13:17.634 response: 00:13:17.634 { 00:13:17.634 "code": -32602, 00:13:17.634 "message": "Invalid parameters" 00:13:17.634 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:17.634 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13774 -i 0 00:13:17.892 [2024-07-25 10:02:02.856937] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13774: invalid cntlid range [0-65519] 00:13:17.892 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:17.892 { 00:13:17.892 "nqn": "nqn.2016-06.io.spdk:cnode13774", 00:13:17.892 "min_cntlid": 0, 00:13:17.892 "method": "nvmf_create_subsystem", 00:13:17.892 "req_id": 1 00:13:17.892 } 00:13:17.892 Got JSON-RPC error response 00:13:17.892 response: 00:13:17.892 { 00:13:17.892 "code": -32602, 00:13:17.892 "message": "Invalid cntlid range [0-65519]" 00:13:17.892 }' 00:13:17.892 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:17.892 { 00:13:17.892 "nqn": "nqn.2016-06.io.spdk:cnode13774", 00:13:17.892 "min_cntlid": 0, 00:13:17.892 "method": "nvmf_create_subsystem", 00:13:17.892 "req_id": 1 00:13:17.892 } 00:13:17.892 Got JSON-RPC error response 00:13:17.892 response: 00:13:17.892 { 00:13:17.892 "code": -32602, 00:13:17.892 "message": 
"Invalid cntlid range [0-65519]" 00:13:17.892 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:17.892 10:02:02 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15612 -i 65520 00:13:17.892 [2024-07-25 10:02:03.041558] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15612: invalid cntlid range [65520-65519] 00:13:18.151 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:18.151 { 00:13:18.151 "nqn": "nqn.2016-06.io.spdk:cnode15612", 00:13:18.151 "min_cntlid": 65520, 00:13:18.151 "method": "nvmf_create_subsystem", 00:13:18.151 "req_id": 1 00:13:18.151 } 00:13:18.151 Got JSON-RPC error response 00:13:18.151 response: 00:13:18.151 { 00:13:18.151 "code": -32602, 00:13:18.151 "message": "Invalid cntlid range [65520-65519]" 00:13:18.151 }' 00:13:18.151 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:18.151 { 00:13:18.151 "nqn": "nqn.2016-06.io.spdk:cnode15612", 00:13:18.151 "min_cntlid": 65520, 00:13:18.151 "method": "nvmf_create_subsystem", 00:13:18.151 "req_id": 1 00:13:18.151 } 00:13:18.151 Got JSON-RPC error response 00:13:18.151 response: 00:13:18.151 { 00:13:18.151 "code": -32602, 00:13:18.151 "message": "Invalid cntlid range [65520-65519]" 00:13:18.151 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.151 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3620 -I 0 00:13:18.151 [2024-07-25 10:02:03.230235] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3620: invalid cntlid range [1-0] 00:13:18.151 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:18.151 { 00:13:18.151 "nqn": "nqn.2016-06.io.spdk:cnode3620", 00:13:18.151 "max_cntlid": 0, 00:13:18.151 "method": "nvmf_create_subsystem", 00:13:18.151 "req_id": 1 00:13:18.151 } 00:13:18.151 Got JSON-RPC error response 00:13:18.151 response: 00:13:18.151 { 00:13:18.151 "code": -32602, 00:13:18.151 "message": "Invalid cntlid range [1-0]" 00:13:18.151 }' 00:13:18.151 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:18.151 { 00:13:18.151 "nqn": "nqn.2016-06.io.spdk:cnode3620", 00:13:18.151 "max_cntlid": 0, 00:13:18.151 "method": "nvmf_create_subsystem", 00:13:18.151 "req_id": 1 00:13:18.151 } 00:13:18.151 Got JSON-RPC error response 00:13:18.151 response: 00:13:18.151 { 00:13:18.151 "code": -32602, 00:13:18.151 "message": "Invalid cntlid range [1-0]" 00:13:18.151 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.151 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27878 -I 65520 00:13:18.410 [2024-07-25 10:02:03.426946] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27878: invalid cntlid range [1-65520] 00:13:18.410 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:18.410 { 00:13:18.410 "nqn": "nqn.2016-06.io.spdk:cnode27878", 00:13:18.410 "max_cntlid": 65520, 00:13:18.410 "method": "nvmf_create_subsystem", 00:13:18.410 "req_id": 1 00:13:18.410 } 00:13:18.410 Got JSON-RPC 
error response 00:13:18.410 response: 00:13:18.410 { 00:13:18.410 "code": -32602, 00:13:18.410 "message": "Invalid cntlid range [1-65520]" 00:13:18.410 }' 00:13:18.410 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:18.410 { 00:13:18.410 "nqn": "nqn.2016-06.io.spdk:cnode27878", 00:13:18.410 "max_cntlid": 65520, 00:13:18.410 "method": "nvmf_create_subsystem", 00:13:18.410 "req_id": 1 00:13:18.410 } 00:13:18.410 Got JSON-RPC error response 00:13:18.410 response: 00:13:18.410 { 00:13:18.410 "code": -32602, 00:13:18.410 "message": "Invalid cntlid range [1-65520]" 00:13:18.410 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.410 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25603 -i 6 -I 5 00:13:18.669 [2024-07-25 10:02:03.607611] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25603: invalid cntlid range [6-5] 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:18.669 { 00:13:18.669 "nqn": "nqn.2016-06.io.spdk:cnode25603", 00:13:18.669 "min_cntlid": 6, 00:13:18.669 "max_cntlid": 5, 00:13:18.669 "method": "nvmf_create_subsystem", 00:13:18.669 "req_id": 1 00:13:18.669 } 00:13:18.669 Got JSON-RPC error response 00:13:18.669 response: 00:13:18.669 { 00:13:18.669 "code": -32602, 00:13:18.669 "message": "Invalid cntlid range [6-5]" 00:13:18.669 }' 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:18.669 { 00:13:18.669 "nqn": "nqn.2016-06.io.spdk:cnode25603", 00:13:18.669 "min_cntlid": 6, 00:13:18.669 "max_cntlid": 5, 00:13:18.669 "method": "nvmf_create_subsystem", 00:13:18.669 "req_id": 1 00:13:18.669 } 00:13:18.669 Got JSON-RPC error response 00:13:18.669 response: 00:13:18.669 { 00:13:18.669 "code": -32602, 00:13:18.669 "message": "Invalid cntlid range [6-5]" 00:13:18.669 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:18.669 { 00:13:18.669 "name": "foobar", 00:13:18.669 "method": "nvmf_delete_target", 00:13:18.669 "req_id": 1 00:13:18.669 } 00:13:18.669 Got JSON-RPC error response 00:13:18.669 response: 00:13:18.669 { 00:13:18.669 "code": -32602, 00:13:18.669 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:18.669 }' 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:18.669 { 00:13:18.669 "name": "foobar", 00:13:18.669 "method": "nvmf_delete_target", 00:13:18.669 "req_id": 1 00:13:18.669 } 00:13:18.669 Got JSON-RPC error response 00:13:18.669 response: 00:13:18.669 { 00:13:18.669 "code": -32602, 00:13:18.669 "message": "The specified target doesn't exist, cannot delete it." 
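The comparison that closes just below (like every [[ ... == *\T\h\e\ ...* ]] check above) is an ordinary shell glob match; xtrace simply prints the expected substring with each character backslash-escaped to keep it literal. Unescaped, the same assertion would read roughly:

    [[ $out == *"The specified target doesn't exist, cannot delete it."* ]]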
00:13:18.669 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:18.669 rmmod nvme_rdma 00:13:18.669 rmmod nvme_fabrics 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2511625 ']' 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2511625 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2511625 ']' 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2511625 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:18.669 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2511625 00:13:18.927 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:18.927 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:18.927 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2511625' 00:13:18.927 killing process with pid 2511625 00:13:18.927 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2511625 00:13:18.927 10:02:03 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2511625 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:19.185 00:13:19.185 real 0m10.111s 00:13:19.185 user 0m20.253s 00:13:19.185 sys 0m5.251s 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:19.185 ************************************ 00:13:19.185 
END TEST nvmf_invalid 00:13:19.185 ************************************ 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:19.185 ************************************ 00:13:19.185 START TEST nvmf_connect_stress 00:13:19.185 ************************************ 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:13:19.185 * Looking for test storage... 00:13:19.185 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.185 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export 
NVMF_APP_SHM_ID 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.186 10:02:04 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@296 -- # e810=() 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:25.757 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:25.757 10:02:09 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:25.757 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:25.757 Found net devices under 0000:da:00.0: mlx_0_0 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:25.757 Found net devices 
under 0000:da:00.1: mlx_0_1 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@420 -- # rdma_device_init 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # uname 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:25.757 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@105 -- # continue 2 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:25.758 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:25.758 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:13:25.758 altname enp218s0f0np0 00:13:25.758 altname ens818f0np0 00:13:25.758 inet 192.168.100.8/24 scope global mlx_0_0 00:13:25.758 valid_lft forever preferred_lft forever 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:25.758 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:25.758 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:13:25.758 altname enp218s0f1np1 00:13:25.758 altname 
ens818f1np1 00:13:25.758 inet 192.168.100.9/24 scope global mlx_0_1 00:13:25.758 valid_lft forever preferred_lft forever 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # continue 2 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # 
awk '{print $4}' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:25.758 192.168.100.9' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:25.758 192.168.100.9' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # head -n 1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:25.758 192.168.100.9' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # tail -n +2 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # head -n 1 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:25.758 10:02:09 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.758 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2515555 00:13:25.758 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2515555 00:13:25.758 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:25.758 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2515555 ']' 00:13:25.759 10:02:10 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.759 [2024-07-25 10:02:10.048717] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:25.759 [2024-07-25 10:02:10.048766] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.759 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.759 [2024-07-25 10:02:10.117629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:25.759 [2024-07-25 10:02:10.196242] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.759 [2024-07-25 10:02:10.196279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.759 [2024-07-25 10:02:10.196286] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.759 [2024-07-25 10:02:10.196292] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.759 [2024-07-25 10:02:10.196316] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
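A note on the startup sequence above: the -m 0xE core mask is binary 1110, i.e. cores 1 through 3, which is why three reactor threads report in just below, and waitforlisten simply blocks until the freshly forked nvmf_tgt answers on /var/tmp/spdk.sock. A minimal stand-alone sketch of that polling idiom, assuming the scripts/rpc.py layout used in this workspace (the retry count and sleep interval are illustrative, not the harness's values):

    #!/usr/bin/env bash
    # Poll the target's UNIX-domain RPC socket until it answers, or give up.
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        if scripts/rpc.py -s "$sock" rpc_get_methods >/dev/null 2>&1; then
            exit 0                              # target is up and serving RPCs
        fi
        sleep 0.1
    done
    echo "timed out waiting for nvmf_tgt on $sock" >&2
    exit 1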
00:13:25.759 [2024-07-25 10:02:10.196428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.759 [2024-07-25 10:02:10.196453] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.759 [2024-07-25 10:02:10.196455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.759 10:02:10 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.018 [2024-07-25 10:02:10.922901] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1ca0200/0x1ca46f0) succeed. 00:13:26.018 [2024-07-25 10:02:10.931793] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1ca17a0/0x1ce5d80) succeed. 
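Between the trap just installed and the NULL1 bdev below, the harness provisions the target under test with four RPCs. Collected in one place, with every value copied from the trace (only the scripts/rpc.py invocation path is an assumption; the harness goes through its rpc_cmd wrapper instead), the equivalent direct sequence is:

    # RDMA transport, 1024 shared buffers, 8 KiB IO unit size
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # Subsystem open to any host (-a), serial SPDK00000000000001, max 10 namespaces
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10
    # Listen for NVMe/RDMA on the first mlx5 port's address
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
    # 1000 MiB null bdev with 512-byte blocks for the stress run
    scripts/rpc.py bdev_null_create NULL1 1000 512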
00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.018 [2024-07-25 10:02:11.037014] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.018 NULL1 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2515800 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 EAL: No free 2048 kB hugepages reported on node 1 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.018 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.019 10:02:11 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.019 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.585 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.585 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:26.585 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.585 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.585 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.844 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.844 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:26.844 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.844 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.844 10:02:11 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.102 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.102 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:27.102 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.102 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.102 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.361 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.361 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:27.361 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.361 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.361 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.619 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.619 10:02:12 
nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:27.619 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.619 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.619 10:02:12 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.186 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.186 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:28.186 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.186 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.186 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.443 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.443 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:28.443 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.443 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.443 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.701 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.701 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:28.701 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.701 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.701 10:02:13 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.960 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.960 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:28.960 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.960 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.960 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.525 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.525 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:29.525 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.525 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.525 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.783 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.783 
10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:29.783 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.783 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.783 10:02:14 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.041 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.041 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:30.041 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.041 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.041 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.299 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.299 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:30.299 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.299 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.299 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.558 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.558 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:30.558 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.558 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.558 10:02:15 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.125 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.125 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:31.125 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.125 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.125 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.383 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.383 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:31.383 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.383 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.383 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.641 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
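The kill -0 2515800 lines repeating through this stretch are the harness's liveness probe on the connect_stress process: signal 0 delivers nothing, a zero exit status only means the PID can still be signalled. Stripped of the xtrace noise, the loop amounts to this sketch (the sleep stands in for the rpc_cmd the harness issues on each pass):

    PERF_PID=2515800                  # pid copied from the trace above
    # kill -0 sends no signal; success just means the process still exists.
    while kill -0 "$PERF_PID" 2>/dev/null; do
        sleep 1
    done
    echo "connect_stress (pid $PERF_PID) has exited"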
00:13:31.641 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:31.641 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.641 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.641 10:02:16 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.898 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.898 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:31.898 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.899 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.899 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.465 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.465 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:32.465 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.465 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.465 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.724 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.724 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:32.724 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.724 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.724 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.982 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.982 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:32.982 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.982 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.982 10:02:17 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.240 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.240 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:33.240 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.240 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.240 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.498 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:13:33.498 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:33.498 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.498 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.498 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.064 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.064 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:34.064 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.064 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.064 10:02:18 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.322 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.322 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:34.322 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.322 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.322 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.580 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.580 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:34.580 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.580 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.580 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.837 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.837 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:34.837 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.837 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.837 10:02:19 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.462 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.462 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:35.462 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.462 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.462 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.462 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.462 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:35.462 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.462 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.462 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.026 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.026 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:36.026 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.026 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.026 10:02:20 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.026 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:13:36.284 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.284 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2515800 00:13:36.284 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2515800) - No such process 00:13:36.284 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2515800 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:36.285 rmmod nvme_rdma 00:13:36.285 rmmod nvme_fabrics 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2515555 ']' 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # 
killprocess 2515555 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2515555 ']' 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2515555 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2515555 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2515555' 00:13:36.285 killing process with pid 2515555 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2515555 00:13:36.285 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2515555 00:13:36.544 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:36.544 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:36.544 00:13:36.544 real 0m17.436s 00:13:36.544 user 0m42.039s 00:13:36.544 sys 0m5.949s 00:13:36.544 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:36.544 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.544 ************************************ 00:13:36.544 END TEST nvmf_connect_stress 00:13:36.544 ************************************ 00:13:36.544 10:02:21 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:36.544 10:02:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:36.544 10:02:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:36.544 10:02:21 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:36.544 ************************************ 00:13:36.544 START TEST nvmf_fused_ordering 00:13:36.544 ************************************ 00:13:36.544 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:13:36.802 * Looking for test storage... 
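The teardown above closes out the kill -0 polling pattern that dominates this test: signal 0 delivers nothing and only reports whether the PID still exists, so the loop keeps issuing RPCs at the target until the stress worker exits, then reaps it with wait (the trace shows the final kill -0 failing with "No such process" before wait 2515800). A minimal sketch of that pattern, with rpc_cmd stubbed out since the real helper in autotest_common.sh wraps scripts/rpc.py:

    # Poll a background worker with `kill -0` while keeping the target busy.
    rpc_cmd() { sleep 0.2; }                  # stub; the real helper wraps scripts/rpc.py
    poll_worker() {
        local pid=$1
        while kill -0 "$pid" 2>/dev/null; do  # signal 0: existence check, nothing delivered
            rpc_cmd
        done
        wait "$pid" 2>/dev/null || true       # reap it; tolerate "No such process"
    }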
00:13:36.802 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:36.802 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:36.802 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:36.802 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:36.802 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:36.802 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:36.802 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:36.802 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:36.802 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:36.802 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:36.802 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:36.803 10:02:21 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 
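The pci_devs/e810/x722/mlx arrays declared above feed the device scan that follows: supported NICs are recognized purely by PCI vendor:device ID (Intel 0x8086 for the e810/x722 parts, Mellanox 0x15b3 for the mlx list). An abridged sketch of that classification via sysfs; the real gather_supported_nvmf_pci_devs keys off the prebuilt pci_bus_cache seen in the trace rather than walking /sys directly:

    # Match NICs by PCI vendor:device ID; 0x15b3:0x1015 is the pair this
    # rig reports below ("Found 0000:da:00.0 (0x15b3 - 0x1015)").
    mellanox=0x15b3
    for dev in /sys/bus/pci/devices/*; do
        vendor=$(<"$dev/vendor") device=$(<"$dev/device")
        if [[ $vendor == "$mellanox" && $device == 0x1015 ]]; then
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done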
00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:43.372 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:43.372 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:43.372 Found net devices under 0000:da:00.0: mlx_0_0 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:43.372 Found net devices under 0000:da:00.1: mlx_0_1 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ 
rdma == tcp ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@420 -- # rdma_device_init 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # uname 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:43.372 10:02:27 
nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:43.372 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:43.372 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:13:43.372 altname enp218s0f0np0 00:13:43.372 altname ens818f0np0 00:13:43.372 inet 192.168.100.8/24 scope global mlx_0_0 00:13:43.372 valid_lft forever preferred_lft forever 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:43.372 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:43.372 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:13:43.372 altname enp218s0f1np1 00:13:43.372 altname ens818f1np1 00:13:43.372 inet 192.168.100.9/24 scope global mlx_0_1 00:13:43.372 valid_lft forever preferred_lft forever 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
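Both ports resolve their addresses through the same three-stage pipeline traced at common.sh@113: ip -o -4 prints one record per address, awk picks field 4 (the CIDR, e.g. 192.168.100.8/24), and cut drops the prefix length. Reassembled as a standalone helper:

    # get_ip_address as traced above: extract the bare IPv4 of an interface.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this rig
    get_ip_address mlx_0_1   # -> 192.168.100.9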
00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:43.372 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # continue 2 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # ip 
-o -4 addr show mlx_0_1 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:43.373 192.168.100.9' 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:43.373 192.168.100.9' 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # head -n 1 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:43.373 192.168.100.9' 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # tail -n +2 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # head -n 1 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2520708 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2520708 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2520708 ']' 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
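nvmfappstart, traced above, boils down to launching nvmf_tgt in the background, remembering its PID (2520708 here), and polling the UNIX-domain RPC socket until the app answers. A simplified equivalent; the rpc_get_methods probe stands in for the real waitforlisten helper, which also enforces the max_retries=100 budget seen in the trace:

    # Start the NVMe-oF target on core 1 (-m 0x2) and wait for its RPC socket.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done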
00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:43.373 10:02:27 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.373 [2024-07-25 10:02:27.536324] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:43.373 [2024-07-25 10:02:27.536364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.373 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.373 [2024-07-25 10:02:27.588444] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.373 [2024-07-25 10:02:27.663840] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.373 [2024-07-25 10:02:27.663879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.373 [2024-07-25 10:02:27.663886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.373 [2024-07-25 10:02:27.663892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.373 [2024-07-25 10:02:27.663897] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:43.373 [2024-07-25 10:02:27.663919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.373 [2024-07-25 10:02:28.394951] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x105dc20/0x1062110) succeed. 00:13:43.373 [2024-07-25 10:02:28.404110] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x105f120/0x10a37a0) succeed. 
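With the app up, fused_ordering.sh@15 creates the RDMA transport, and the two create_ib_device notices above confirm that both mlx5 ports registered with it. The same RPC issued directly against the default socket (-u sets the IO unit size in bytes):

    # Transport creation as traced at fused_ordering.sh@15.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192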
00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.373 [2024-07-25 10:02:28.466589] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.373 NULL1 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.373 10:02:28 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:43.373 [2024-07-25 10:02:28.519598] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
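fused_ordering.sh@16-20, traced above, then stand up the storage path: a subsystem capped at 10 namespaces, an RDMA listener on 192.168.100.8:4420, a 1000 MB null bdev, and that bdev attached as a namespace (reported as "Namespace ID: 1 size: 1GB" once the test app connects below). The same sequence as direct rpc.py calls:

    # Subsystem + listener + backing namespace, per fused_ordering.sh@16-20.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512   # 1000 MB, 512-byte blocks
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering binary launched at @22 then connects using the 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' string, and each fused_ordering(N) line that follows appears to mark one iteration of its fused-command submission loop.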
00:13:43.373 [2024-07-25 10:02:28.519629] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520810 ] 00:13:43.632 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.632 Attached to nqn.2016-06.io.spdk:cnode1 00:13:43.632 Namespace ID: 1 size: 1GB 00:13:43.632 fused_ordering(0) 00:13:43.632 fused_ordering(1) 00:13:43.632 fused_ordering(2) 00:13:43.632 fused_ordering(3) 00:13:43.632 fused_ordering(4) 00:13:43.632 fused_ordering(5) 00:13:43.632 fused_ordering(6) 00:13:43.632 fused_ordering(7) 00:13:43.632 fused_ordering(8) 00:13:43.632 fused_ordering(9) 00:13:43.632 fused_ordering(10) 00:13:43.632 fused_ordering(11) 00:13:43.632 fused_ordering(12) 00:13:43.632 fused_ordering(13) 00:13:43.632 fused_ordering(14) 00:13:43.632 fused_ordering(15) 00:13:43.632 fused_ordering(16) 00:13:43.632 fused_ordering(17) 00:13:43.632 fused_ordering(18) 00:13:43.632 fused_ordering(19) 00:13:43.632 fused_ordering(20) 00:13:43.632 fused_ordering(21) 00:13:43.632 fused_ordering(22) 00:13:43.632 fused_ordering(23) 00:13:43.632 fused_ordering(24) 00:13:43.632 fused_ordering(25) 00:13:43.632 fused_ordering(26) 00:13:43.632 fused_ordering(27) 00:13:43.632 fused_ordering(28) 00:13:43.632 fused_ordering(29) 00:13:43.632 fused_ordering(30) 00:13:43.632 fused_ordering(31) 00:13:43.632 fused_ordering(32) 00:13:43.632 fused_ordering(33) 00:13:43.632 fused_ordering(34) 00:13:43.632 fused_ordering(35) 00:13:43.632 fused_ordering(36) 00:13:43.632 fused_ordering(37) 00:13:43.632 fused_ordering(38) 00:13:43.632 fused_ordering(39) 00:13:43.632 fused_ordering(40) 00:13:43.632 fused_ordering(41) 00:13:43.632 fused_ordering(42) 00:13:43.632 fused_ordering(43) 00:13:43.632 fused_ordering(44) 00:13:43.632 fused_ordering(45) 00:13:43.632 fused_ordering(46) 00:13:43.632 fused_ordering(47) 00:13:43.632 fused_ordering(48) 00:13:43.632 fused_ordering(49) 00:13:43.632 fused_ordering(50) 00:13:43.632 fused_ordering(51) 00:13:43.632 fused_ordering(52) 00:13:43.632 fused_ordering(53) 00:13:43.632 fused_ordering(54) 00:13:43.632 fused_ordering(55) 00:13:43.632 fused_ordering(56) 00:13:43.632 fused_ordering(57) 00:13:43.632 fused_ordering(58) 00:13:43.632 fused_ordering(59) 00:13:43.632 fused_ordering(60) 00:13:43.632 fused_ordering(61) 00:13:43.632 fused_ordering(62) 00:13:43.632 fused_ordering(63) 00:13:43.632 fused_ordering(64) 00:13:43.632 fused_ordering(65) 00:13:43.632 fused_ordering(66) 00:13:43.632 fused_ordering(67) 00:13:43.632 fused_ordering(68) 00:13:43.633 fused_ordering(69) 00:13:43.633 fused_ordering(70) 00:13:43.633 fused_ordering(71) 00:13:43.633 fused_ordering(72) 00:13:43.633 fused_ordering(73) 00:13:43.633 fused_ordering(74) 00:13:43.633 fused_ordering(75) 00:13:43.633 fused_ordering(76) 00:13:43.633 fused_ordering(77) 00:13:43.633 fused_ordering(78) 00:13:43.633 fused_ordering(79) 00:13:43.633 fused_ordering(80) 00:13:43.633 fused_ordering(81) 00:13:43.633 fused_ordering(82) 00:13:43.633 fused_ordering(83) 00:13:43.633 fused_ordering(84) 00:13:43.633 fused_ordering(85) 00:13:43.633 fused_ordering(86) 00:13:43.633 fused_ordering(87) 00:13:43.633 fused_ordering(88) 00:13:43.633 fused_ordering(89) 00:13:43.633 fused_ordering(90) 00:13:43.633 fused_ordering(91) 00:13:43.633 fused_ordering(92) 00:13:43.633 fused_ordering(93) 00:13:43.633 fused_ordering(94) 00:13:43.633 fused_ordering(95) 00:13:43.633 fused_ordering(96) 
00:13:43.633 fused_ordering(97) 00:13:43.633 fused_ordering(98) 00:13:43.633 fused_ordering(99) 00:13:43.633 fused_ordering(100) 00:13:43.633 fused_ordering(101) 00:13:43.633 fused_ordering(102) 00:13:43.633 fused_ordering(103) 00:13:43.633 fused_ordering(104) 00:13:43.633 fused_ordering(105) 00:13:43.633 fused_ordering(106) 00:13:43.633 fused_ordering(107) 00:13:43.633 fused_ordering(108) 00:13:43.633 fused_ordering(109) 00:13:43.633 fused_ordering(110) 00:13:43.633 fused_ordering(111) 00:13:43.633 fused_ordering(112) 00:13:43.633 fused_ordering(113) 00:13:43.633 fused_ordering(114) 00:13:43.633 fused_ordering(115) 00:13:43.633 fused_ordering(116) 00:13:43.633 fused_ordering(117) 00:13:43.633 fused_ordering(118) 00:13:43.633 fused_ordering(119) 00:13:43.633 fused_ordering(120) 00:13:43.633 fused_ordering(121) 00:13:43.633 fused_ordering(122) 00:13:43.633 fused_ordering(123) 00:13:43.633 fused_ordering(124) 00:13:43.633 fused_ordering(125) 00:13:43.633 fused_ordering(126) 00:13:43.633 fused_ordering(127) 00:13:43.633 fused_ordering(128) 00:13:43.633 fused_ordering(129) 00:13:43.633 fused_ordering(130) 00:13:43.633 fused_ordering(131) 00:13:43.633 fused_ordering(132) 00:13:43.633 fused_ordering(133) 00:13:43.633 fused_ordering(134) 00:13:43.633 fused_ordering(135) 00:13:43.633 fused_ordering(136) 00:13:43.633 fused_ordering(137) 00:13:43.633 fused_ordering(138) 00:13:43.633 fused_ordering(139) 00:13:43.633 fused_ordering(140) 00:13:43.633 fused_ordering(141) 00:13:43.633 fused_ordering(142) 00:13:43.633 fused_ordering(143) 00:13:43.633 fused_ordering(144) 00:13:43.633 fused_ordering(145) 00:13:43.633 fused_ordering(146) 00:13:43.633 fused_ordering(147) 00:13:43.633 fused_ordering(148) 00:13:43.633 fused_ordering(149) 00:13:43.633 fused_ordering(150) 00:13:43.633 fused_ordering(151) 00:13:43.633 fused_ordering(152) 00:13:43.633 fused_ordering(153) 00:13:43.633 fused_ordering(154) 00:13:43.633 fused_ordering(155) 00:13:43.633 fused_ordering(156) 00:13:43.633 fused_ordering(157) 00:13:43.633 fused_ordering(158) 00:13:43.633 fused_ordering(159) 00:13:43.633 fused_ordering(160) 00:13:43.633 fused_ordering(161) 00:13:43.633 fused_ordering(162) 00:13:43.633 fused_ordering(163) 00:13:43.633 fused_ordering(164) 00:13:43.633 fused_ordering(165) 00:13:43.633 fused_ordering(166) 00:13:43.633 fused_ordering(167) 00:13:43.633 fused_ordering(168) 00:13:43.633 fused_ordering(169) 00:13:43.633 fused_ordering(170) 00:13:43.633 fused_ordering(171) 00:13:43.633 fused_ordering(172) 00:13:43.633 fused_ordering(173) 00:13:43.633 fused_ordering(174) 00:13:43.633 fused_ordering(175) 00:13:43.633 fused_ordering(176) 00:13:43.633 fused_ordering(177) 00:13:43.633 fused_ordering(178) 00:13:43.633 fused_ordering(179) 00:13:43.633 fused_ordering(180) 00:13:43.633 fused_ordering(181) 00:13:43.633 fused_ordering(182) 00:13:43.633 fused_ordering(183) 00:13:43.633 fused_ordering(184) 00:13:43.633 fused_ordering(185) 00:13:43.633 fused_ordering(186) 00:13:43.633 fused_ordering(187) 00:13:43.633 fused_ordering(188) 00:13:43.633 fused_ordering(189) 00:13:43.633 fused_ordering(190) 00:13:43.633 fused_ordering(191) 00:13:43.633 fused_ordering(192) 00:13:43.633 fused_ordering(193) 00:13:43.633 fused_ordering(194) 00:13:43.633 fused_ordering(195) 00:13:43.633 fused_ordering(196) 00:13:43.633 fused_ordering(197) 00:13:43.633 fused_ordering(198) 00:13:43.633 fused_ordering(199) 00:13:43.633 fused_ordering(200) 00:13:43.633 fused_ordering(201) 00:13:43.633 fused_ordering(202) 00:13:43.633 fused_ordering(203) 00:13:43.633 
00:13:43.633 fused_ordering(204) [fused_ordering(205) through fused_ordering(1022) elided: sequential per-request counter output, Jenkins timestamps 00:13:43.633 through 00:13:44.155] 00:13:44.154 fused_ordering(1023) 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:13:44.155 rmmod nvme_rdma 00:13:44.155 rmmod nvme_fabrics 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 --
# set -e 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2520708 ']' 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2520708 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2520708 ']' 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 2520708 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2520708 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2520708' 00:13:44.155 killing process with pid 2520708 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2520708 00:13:44.155 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2520708 00:13:44.414 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:44.414 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:13:44.414 00:13:44.414 real 0m7.846s 00:13:44.414 user 0m4.366s 00:13:44.414 sys 0m4.718s 00:13:44.414 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.414 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:44.414 ************************************ 00:13:44.414 END TEST nvmf_fused_ordering 00:13:44.414 ************************************ 00:13:44.414 10:02:29 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=rdma 00:13:44.414 10:02:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:44.414 10:02:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.414 10:02:29 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:44.673 ************************************ 00:13:44.673 START TEST nvmf_ns_masking 00:13:44.673 ************************************ 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=rdma 00:13:44.673 * Looking for test storage... 
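The fused_ordering teardown above follows the harness's standard cleanup flow: reset the traps, unload nvme-rdma and nvme-fabrics, then kill the target process only after confirming it is still alive (kill -0) and checking its command name with ps. A minimal bash sketch of that killprocess pattern (illustrative only; the real helper in common/autotest_common.sh adds retries and sudo handling):

    # Hypothetical stand-alone version of the killprocess flow logged above.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" 2>/dev/null || return 0    # process already exited
        local name
        name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1 in the log
        echo "killing process with pid $pid ($name)"
        kill "$pid"
        wait "$pid" 2>/dev/null                   # reap it before rmmod runs
    }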
00:13:44.673 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH export xtrace elided [paths/export.sh@2-@6: each nested sourcing prepends /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin to PATH again, then exports and echoes the accumulated value] 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:44.673 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:44.674
10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=bdca7ffa-386a-4ceb-97e9-92cc9d89d3ed 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=8ccebc59-71b2-40b8-8753-6786275cd273 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=13c95ce5-b7f6-433c-b470-2a19f014e306 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:44.674 10:02:29 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:51.241 10:02:35 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # 
pci_devs=("${mlx[@]}") 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:13:51.241 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:13:51.241 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.241 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:13:51.242 Found net devices under 0000:da:00.0: mlx_0_0 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:13:51.242 Found net devices under 0000:da:00.1: mlx_0_1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@420 -- # rdma_device_init 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # uname 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@62 -- # modprobe ib_cm 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@63 -- # modprobe ib_core 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@64 -- # modprobe ib_umad 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@66 -- # modprobe iw_cm 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # allocate_nic_ips 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # get_rdma_if_list 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:13:51.242 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:51.242 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:13:51.242 altname enp218s0f0np0 00:13:51.242 altname ens818f0np0 00:13:51.242 inet 192.168.100.8/24 scope global mlx_0_0 00:13:51.242 valid_lft forever preferred_lft forever 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:13:51.242 10:02:35 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:13:51.242 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:13:51.242 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:13:51.242 altname enp218s0f1np1 00:13:51.242 altname ens818f1np1 00:13:51.242 inet 192.168.100.9/24 scope global mlx_0_1 00:13:51.242 valid_lft forever preferred_lft forever 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # get_rdma_if_list 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_0 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # echo mlx_0_1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # continue 2 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:13:51.242 10:02:35 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # awk '{print $4}' 00:13:51.242 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@113 -- # cut -d/ -f1 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:13:51.243 192.168.100.9' 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:13:51.243 192.168.100.9' 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # head -n 1 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:13:51.243 192.168.100.9' 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # tail -n +2 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # head -n 1 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2524033 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2524033 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2524033 ']' 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:51.243 10:02:35 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.243 [2024-07-25 10:02:35.432294] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:51.243 [2024-07-25 10:02:35.432342] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.243 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.243 [2024-07-25 10:02:35.500283] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.243 [2024-07-25 10:02:35.577481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.243 [2024-07-25 10:02:35.577516] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.243 [2024-07-25 10:02:35.577523] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.243 [2024-07-25 10:02:35.577529] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.243 [2024-07-25 10:02:35.577534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.243 [2024-07-25 10:02:35.577550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.243 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:51.243 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:51.243 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:51.243 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:51.243 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:51.243 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:51.243 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:13:51.502 [2024-07-25 10:02:36.438617] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2173910/0x2177e00) succeed. 
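The target bring-up just logged can be replayed by hand: launch nvmf_tgt, wait until the RPC socket answers, then create the RDMA transport. A condensed sketch using the flag values from the log, with paths relative to the SPDK repo root (the harness's waitforlisten does more bookkeeping than this simplified polling loop):

    # Start the target and create the RDMA transport, mirroring the log above.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # stand-in for waitforlisten: poll until the RPC server responds
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192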
00:13:51.502 [2024-07-25 10:02:36.447334] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2174e10/0x21b9490) succeed. 00:13:51.502 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:51.502 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:51.502 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:51.761 Malloc1 00:13:51.761 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:51.761 Malloc2 00:13:51.761 10:02:36 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.019 10:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:52.278 10:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:13:52.278 [2024-07-25 10:02:37.396256] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:13:52.278 10:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:52.278 10:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 13c95ce5-b7f6-433c-b470-2a19f014e306 -a 192.168.100.8 -s 4420 -i 4 00:13:52.846 10:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:52.846 10:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:52.846 10:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:52.846 10:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:52.846 10:02:37 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:54.750 10:02:39 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:54.750 [ 0]:0x1 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f4043bf30344446b8634a76eb6308e1 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f4043bf30344446b8634a76eb6308e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:54.750 10:02:39 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:13:55.009 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.010 [ 0]:0x1 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f4043bf30344446b8634a76eb6308e1 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f4043bf30344446b8634a76eb6308e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:55.010 [ 1]:0x2 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c769364c75314526b212afa67c1e33e3 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c769364c75314526b212afa67c1e33e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@75 -- # disconnect 00:13:55.010 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.578 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.578 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:13:55.836 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:13:55.836 10:02:40 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 13c95ce5-b7f6-433c-b470-2a19f014e306 -a 192.168.100.8 -s 4420 -i 4 00:13:56.095 10:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:13:56.095 10:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:56.095 10:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:56.095 10:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:13:56.095 10:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:13:56.095 10:02:41 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:57.998 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:58.256 [ 0]:0x2 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.256 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c769364c75314526b212afa67c1e33e3 00:13:58.257 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c769364c75314526b212afa67c1e33e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.257 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.522 [ 0]:0x1 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.522 
10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f4043bf30344446b8634a76eb6308e1 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f4043bf30344446b8634a76eb6308e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.522 [ 1]:0x2 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c769364c75314526b212afa67c1e33e3 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c769364c75314526b212afa67c1e33e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.522 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # es=1 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:58.782 [ 0]:0x2 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c769364c75314526b212afa67c1e33e3 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c769364c75314526b212afa67c1e33e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:13:58.782 10:02:43 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:59.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.041 10:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:59.300 10:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:13:59.300 10:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 13c95ce5-b7f6-433c-b470-2a19f014e306 -a 192.168.100.8 -s 4420 -i 4 00:13:59.558 10:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:59.558 10:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:59.558 10:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.558 10:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:59.558 10:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:59.558 10:02:44 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:01.462 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:01.462 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:01.462 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:01.462 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:01.462 10:02:46 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:01.462 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:01.462 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:01.462 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.721 [ 0]:0x1 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0f4043bf30344446b8634a76eb6308e1 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0f4043bf30344446b8634a76eb6308e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:01.721 [ 1]:0x2 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c769364c75314526b212afa67c1e33e3 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c769364c75314526b212afa67c1e33e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.721 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
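The ns_is_visible checks that the trace keeps expanding come from a small helper in target/ns_masking.sh. Reconstructed from the xtrace (the real helper may differ in detail), it amounts to:

    # Reconstructed from the xtrace above; not the verbatim helper body.
    ns_is_visible() {
        local nsid=$1
        nvme list-ns /dev/nvme0 | grep "$nsid"    # is the namespace listed at all?
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # A namespace masked away from this host identifies with an all-zero NGUID.
        [[ $nguid != 00000000000000000000000000000000 ]]
    }

The NOT wrapper inverts the expectation: after nvmf_ns_remove_host, ns_is_visible 0x1 must fail (the es=1 in the trace) for the test to pass, while namespace 0x2 must remain visible.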
00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:01.980 10:02:46 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:01.980 [ 0]:0x2 00:14:01.980 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:01.980 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:01.980 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c769364c75314526b212afa67c1e33e3 00:14:01.980 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c769364c75314526b212afa67c1e33e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:01.980 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:01.980 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:01.980 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:01.980 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:01.980 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.981 10:02:47 
nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:01.981 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.981 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:01.981 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:01.981 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:14:01.981 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:14:01.981 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:02.240 [2024-07-25 10:02:47.199423] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:02.240 request: 00:14:02.240 { 00:14:02.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:02.240 "nsid": 2, 00:14:02.240 "host": "nqn.2016-06.io.spdk:host1", 00:14:02.240 "method": "nvmf_ns_remove_host", 00:14:02.240 "req_id": 1 00:14:02.240 } 00:14:02.240 Got JSON-RPC error response 00:14:02.240 response: 00:14:02.240 { 00:14:02.240 "code": -32602, 00:14:02.240 "message": "Invalid parameters" 00:14:02.240 } 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme 
id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.240 [ 0]:0x2 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c769364c75314526b212afa67c1e33e3 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c769364c75314526b212afa67c1e33e3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:02.240 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.759 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2526260 00:14:02.759 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:02.759 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:02.759 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2526260 /var/tmp/host.sock 00:14:02.759 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2526260 ']' 00:14:02.759 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:02.759 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:02.759 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:02.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
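From here the script drives a second SPDK application (started above with -r /var/tmp/host.sock -m 2, pid 2526260) that plays the NVMe-oF host role. The hostrpc helper expanded in the trace below is essentially a one-line wrapper; a reconstruction:

    hostrpc() {
        # Same rpc.py, but pointed at the host-side app's socket instead of the target's /var/tmp/spdk.sock.
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
    # As traced below: attach the target's subsystem from the host side over RDMA.
    hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0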
00:14:02.759 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:02.759 10:02:47 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:02.759 [2024-07-25 10:02:47.709339] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:02.759 [2024-07-25 10:02:47.709382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2526260 ] 00:14:02.759 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.759 [2024-07-25 10:02:47.773922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:02.759 [2024-07-25 10:02:47.846176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:03.696 10:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:03.696 10:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:03.697 10:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:03.697 10:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:03.955 10:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid bdca7ffa-386a-4ceb-97e9-92cc9d89d3ed 00:14:03.955 10:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:03.955 10:02:48 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g BDCA7FFA386A4CEB97E992CC9D89D3ED -i 00:14:03.955 10:02:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 8ccebc59-71b2-40b8-8753-6786275cd273 00:14:03.955 10:02:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:03.955 10:02:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 8CCEBC5971B240B887536786275CD273 -i 00:14:04.214 10:02:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:04.473 10:02:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:04.473 10:02:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:04.473 10:02:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n 
nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:04.732 nvme0n1 00:14:04.732 10:02:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:04.732 10:02:49 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:04.990 nvme1n2 00:14:04.990 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:04.990 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:04.991 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:04.991 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:04.991 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:05.249 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:05.249 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:05.249 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:05.249 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ bdca7ffa-386a-4ceb-97e9-92cc9d89d3ed == \b\d\c\a\7\f\f\a\-\3\8\6\a\-\4\c\e\b\-\9\7\e\9\-\9\2\c\c\9\d\8\9\d\3\e\d ]] 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 8ccebc59-71b2-40b8-8753-6786275cd273 == \8\c\c\e\b\c\5\9\-\7\1\b\2\-\4\0\b\8\-\8\7\5\3\-\6\7\8\6\2\7\5\c\d\2\7\3 ]] 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2526260 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2526260 ']' 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2526260 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:05.508 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2526260 
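The uuid2nguid conversion used just above deserves a note: from the input/output pair in the trace (bdca7ffa-386a-4ceb-97e9-92cc9d89d3ed registered as NGUID BDCA7FFA386A4CEB97E992CC9D89D3ED), the helper in nvmf/common.sh uppercases the UUID and strips the dashes. Only the tr -d - stage is expanded in the trace, so this is an approximation:

    uuid2nguid() {
        # NGUID = the UUID's hex digits, uppercased, with separators removed.
        tr -d - <<< "${1^^}"
    }
    # uuid2nguid 8ccebc59-71b2-40b8-8753-6786275cd273  ->  8CCEBC5971B240B887536786275CD273

This is why the bdev_get_bdevs checks above can still compare the bdevs' dashed, lowercase .uuid fields against the original UUIDs: the dash-free NGUID form is only used when registering the namespaces.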
00:14:05.767 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:05.767 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:05.767 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2526260' 00:14:05.767 killing process with pid 2526260 00:14:05.767 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2526260 00:14:05.767 10:02:50 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2526260 00:14:06.025 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:06.340 rmmod nvme_rdma 00:14:06.340 rmmod nvme_fabrics 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 2524033 ']' 00:14:06.340 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2524033 00:14:06.341 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2524033 ']' 00:14:06.341 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2524033 00:14:06.341 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:06.341 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:06.341 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2524033 00:14:06.341 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:06.341 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:06.341 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2524033' 00:14:06.341 killing process with pid 
2524033 00:14:06.341 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2524033 00:14:06.341 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2524033 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:06.600 00:14:06.600 real 0m21.992s 00:14:06.600 user 0m25.872s 00:14:06.600 sys 0m6.173s 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:06.600 ************************************ 00:14:06.600 END TEST nvmf_ns_masking 00:14:06.600 ************************************ 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.600 ************************************ 00:14:06.600 START TEST nvmf_nvme_cli 00:14:06.600 ************************************ 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:14:06.600 * Looking for test storage... 
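The teardown traced above is the shared nvmftestfini path from nvmf/common.sh: sync, unload the kernel initiator modules (the bare "rmmod nvme_rdma" / "rmmod nvme_fabrics" lines are modprobe -v -r output), then kill the target. In outline (a reconstruction; the real function also handles tcp and iso modes and retries the unload, as the for i in {1..20} loop in the trace suggests):

    nvmftestfini() {
        sync
        modprobe -v -r nvme-rdma      # prints the rmmod lines captured above
        modprobe -v -r nvme-fabrics
        killprocess "$nvmfpid"        # nvmf_tgt pid 2524033 in this run
    }

The closing real/user/sys triple is the per-test timing printed with the END TEST banner, after which the harness starts the next test, nvmf_nvme_cli, below.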
00:14:06.600 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.600 10:02:51 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:06.600 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.860 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:06.860 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:06.860 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.860 10:02:51 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:12.134 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:12.135 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:12.135 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x1015 == 
\0\x\1\0\1\9 ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:12.135 Found net devices under 0000:da:00.0: mlx_0_0 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:14:12.135 Found net devices under 0000:da:00.1: mlx_0_1 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@420 -- # rdma_device_init 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # uname 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:12.135 10:02:57 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:12.135 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 
-- # cut -d/ -f1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:12.395 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:12.395 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:14:12.395 altname enp218s0f0np0 00:14:12.395 altname ens818f0np0 00:14:12.395 inet 192.168.100.8/24 scope global mlx_0_0 00:14:12.395 valid_lft forever preferred_lft forever 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:12.395 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:12.395 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:14:12.395 altname enp218s0f1np1 00:14:12.395 altname ens818f1np1 00:14:12.395 inet 192.168.100.9/24 scope global mlx_0_1 00:14:12.395 valid_lft forever preferred_lft forever 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.395 
10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # continue 2 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:12.395 192.168.100.9' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:12.395 192.168.100.9' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # head -n 1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:12.395 192.168.100.9' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # tail -n +2 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # head -n 1 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2530044 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2530044 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2530044 ']' 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:12.395 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:12.396 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:12.396 10:02:57 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:12.396 [2024-07-25 10:02:57.498908] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:12.396 [2024-07-25 10:02:57.498956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.396 EAL: No free 2048 kB hugepages reported on node 1 00:14:12.654 [2024-07-25 10:02:57.566163] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.654 [2024-07-25 10:02:57.646427] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.654 [2024-07-25 10:02:57.646463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.654 [2024-07-25 10:02:57.646469] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.654 [2024-07-25 10:02:57.646476] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.654 [2024-07-25 10:02:57.646480] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
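The get_ip_address helper traced above (nvmf/common.sh@112-113) is what turns each RDMA interface into a target address; a minimal standalone sketch of the same pipeline, assuming only that the interface already carries an IPv4 address:

    get_ip_address() {
        local interface=$1
        # -o prints one line per address; field 4 is "addr/prefix", cut drops the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8 on this node
    get_ip_address mlx_0_1   # -> 192.168.100.9
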
00:14:12.654 [2024-07-25 10:02:57.646535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.654 [2024-07-25 10:02:57.646643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.654 [2024-07-25 10:02:57.646747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.654 [2024-07-25 10:02:57.646748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:13.220 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:13.220 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:13.220 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:13.220 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:13.220 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.220 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:13.220 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:14:13.220 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.220 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 [2024-07-25 10:02:58.386167] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x18b4cc0/0x18b91b0) succeed. 00:14:13.479 [2024-07-25 10:02:58.395058] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x18b6300/0x18fa840) succeed. 
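From here on the nvme_cli test configures the target entirely over the RPC socket; rpc_cmd is the autotest wrapper around scripts/rpc.py, so the configuration traced in the next entries can be reproduced by hand roughly as follows (a sketch using the arguments from this run, not the verbatim test script):

    # transport, backing bdevs, subsystem, namespaces, listener -- in that order
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MB bdev, 512-byte blocks
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420
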
00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 Malloc0 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 Malloc1 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 [2024-07-25 10:02:58.577353] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:14:13.479 10:02:58 
nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.479 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -a 192.168.100.8 -s 4420 00:14:13.737 00:14:13.737 Discovery Log Number of Records 2, Generation counter 2 00:14:13.737 =====Discovery Log Entry 0====== 00:14:13.737 trtype: rdma 00:14:13.737 adrfam: ipv4 00:14:13.737 subtype: current discovery subsystem 00:14:13.737 treq: not required 00:14:13.737 portid: 0 00:14:13.737 trsvcid: 4420 00:14:13.737 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:13.737 traddr: 192.168.100.8 00:14:13.737 eflags: explicit discovery connections, duplicate discovery information 00:14:13.737 rdma_prtype: not specified 00:14:13.737 rdma_qptype: connected 00:14:13.737 rdma_cms: rdma-cm 00:14:13.737 rdma_pkey: 0x0000 00:14:13.737 =====Discovery Log Entry 1====== 00:14:13.737 trtype: rdma 00:14:13.737 adrfam: ipv4 00:14:13.737 subtype: nvme subsystem 00:14:13.737 treq: not required 00:14:13.737 portid: 0 00:14:13.737 trsvcid: 4420 00:14:13.737 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:13.737 traddr: 192.168.100.8 00:14:13.737 eflags: none 00:14:13.737 rdma_prtype: not specified 00:14:13.737 rdma_qptype: connected 00:14:13.737 rdma_cms: rdma-cm 00:14:13.737 rdma_pkey: 0x0000 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:13.737 10:02:58 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:14:14.671 10:02:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:14.671 10:02:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:14.671 10:02:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:14.671 10:02:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:14.671 10:02:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:14.671 10:02:59 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:16.571 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:16.571 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:16.571 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:16.571 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:16.571 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:16.571 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:16.571 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:16.571 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:16.571 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:16.572 /dev/nvme0n1 ]] 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:16.572 10:03:01 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:17.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:14:17.582 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:14:17.583 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:17.583 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:17.583 
10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:14:17.583 rmmod nvme_rdma 00:14:17.841 rmmod nvme_fabrics 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2530044 ']' 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2530044 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2530044 ']' 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2530044 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2530044 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2530044' 00:14:17.841 killing process with pid 2530044 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2530044 00:14:17.841 10:03:02 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2530044 00:14:18.101 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:18.101 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:14:18.101 00:14:18.101 real 0m11.490s 00:14:18.101 user 0m23.397s 00:14:18.101 sys 0m4.822s 00:14:18.101 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.101 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.101 ************************************ 00:14:18.101 END TEST nvmf_nvme_cli 00:14:18.101 ************************************ 00:14:18.101 10:03:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 0 -eq 1 ]] 00:14:18.101 10:03:03 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:18.101 10:03:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:18.101 10:03:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.101 10:03:03 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.101 ************************************ 00:14:18.101 START TEST nvmf_auth_target 00:14:18.101 ************************************ 00:14:18.101 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=rdma 00:14:18.360 * Looking for test storage... 00:14:18.360 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 
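Both tests identify themselves to the target with the host NQN generated at nvmf/common.sh@17 above; a minimal sketch of how that identity feeds the nvme-cli calls exercised in the previous test (the UUID-stripping step is an assumed derivation, not the verbatim helper):

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumption: host ID is the trailing UUID of the NQN
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -a 192.168.100.8 -s 4420
    nvme connect -i 15 --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
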
00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:18.360 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:18.361 10:03:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@295 -- # net_devs=() 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.633 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:14:23.893 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:23.893 10:03:08 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:14:23.893 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:14:23.893 Found net devices under 0000:da:00.0: mlx_0_0 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: 
mlx_0_1' 00:14:23.893 Found net devices under 0000:da:00.1: mlx_0_1 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@420 -- # rdma_device_init 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # uname 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:23.893 
10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:14:23.893 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:14:23.893 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:23.893 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:14:23.893 altname enp218s0f0np0 00:14:23.893 altname ens818f0np0 00:14:23.894 inet 192.168.100.8/24 scope global mlx_0_0 00:14:23.894 valid_lft forever preferred_lft forever 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:14:23.894 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:14:23.894 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:14:23.894 altname enp218s0f1np1 00:14:23.894 altname ens818f1np1 00:14:23.894 inet 192.168.100.9/24 scope global mlx_0_1 00:14:23.894 valid_lft forever preferred_lft forever 
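What the stretch above is doing: this is the RDMA bring-up for the test run. load_ib_rdma_modules probes the InfiniBand core stack via modprobe, and get_ip_address derives each interface's IPv4 address by parsing one-line "ip -o -4" output, which is how the harness arrives at 192.168.100.8 (mlx_0_0) and 192.168.100.9 (mlx_0_1). Below is a minimal sketch of those two helpers, reconstructed only from the modprobe and ip/awk/cut calls visible in the trace; the names follow nvmf/common.sh, but this is not the verbatim source.

#!/usr/bin/env bash
# Sketch of the bring-up helpers traced above, reconstructed from the log.

load_ib_rdma_modules() {
    [[ $(uname) == Linux ]] || return 0   # the trace guards on uname the same way
    local mod
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done
}

get_ip_address() {
    # "ip -o -4" prints one line per address; field 4 is addr/prefix-len
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9 in this run
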
00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # continue 2 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:14:23.894 192.168.100.9' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:14:23.894 192.168.100.9' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # head -n 1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:14:23.894 192.168.100.9' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # tail -n +2 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # head -n 1 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:23.894 10:03:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.894 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2534432 00:14:23.894 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2534432 00:14:23.894 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:23.894 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2534432 ']' 00:14:23.894 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.894 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:23.894 10:03:09 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.894 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:23.894 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2534805 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=725c83197c2e1e4b9afb8f27de167eb67eaf58e135a33121 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QjR 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 725c83197c2e1e4b9afb8f27de167eb67eaf58e135a33121 0 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 725c83197c2e1e4b9afb8f27de167eb67eaf58e135a33121 0 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=725c83197c2e1e4b9afb8f27de167eb67eaf58e135a33121 00:14:24.831 
10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QjR 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QjR 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.QjR 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a0fd5f7b45d6f2283f0e6a6bc6ae6c42daf9ec44ac182d91c8aefdd7d309d2c8 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:24.831 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yKc 00:14:24.832 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a0fd5f7b45d6f2283f0e6a6bc6ae6c42daf9ec44ac182d91c8aefdd7d309d2c8 3 00:14:24.832 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a0fd5f7b45d6f2283f0e6a6bc6ae6c42daf9ec44ac182d91c8aefdd7d309d2c8 3 00:14:24.832 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:24.832 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:24.832 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a0fd5f7b45d6f2283f0e6a6bc6ae6c42daf9ec44ac182d91c8aefdd7d309d2c8 00:14:24.832 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:24.832 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:25.090 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yKc 00:14:25.090 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yKc 00:14:25.090 10:03:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.yKc 00:14:25.090 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:25.090 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:25.090 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' 
['sha512']='3') 00:14:25.090 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:25.090 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=38b72cb5a70ea4ddc553a7b0ec4583ce 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.A9o 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 38b72cb5a70ea4ddc553a7b0ec4583ce 1 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 38b72cb5a70ea4ddc553a7b0ec4583ce 1 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=38b72cb5a70ea4ddc553a7b0ec4583ce 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.A9o 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.A9o 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.A9o 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=307b7bc17a83ce7e6d33ffe9e4b3bc3bfbac83a18900ea4a 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tZs 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 307b7bc17a83ce7e6d33ffe9e4b3bc3bfbac83a18900ea4a 2 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@719 -- # format_key DHHC-1 307b7bc17a83ce7e6d33ffe9e4b3bc3bfbac83a18900ea4a 2 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=307b7bc17a83ce7e6d33ffe9e4b3bc3bfbac83a18900ea4a 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tZs 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tZs 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.tZs 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=eca8a4e5b4c094236c481177cebdd4fd00e766bcf51d5960 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.bH4 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key eca8a4e5b4c094236c481177cebdd4fd00e766bcf51d5960 2 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 eca8a4e5b4c094236c481177cebdd4fd00e766bcf51d5960 2 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=eca8a4e5b4c094236c481177cebdd4fd00e766bcf51d5960 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.bH4 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.bH4 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.bH4 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=3658e85979efb65756919ee7f7ebc794 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.u84 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 3658e85979efb65756919ee7f7ebc794 1 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 3658e85979efb65756919ee7f7ebc794 1 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=3658e85979efb65756919ee7f7ebc794 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.u84 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.u84 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.u84 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7599348a6d77e3884ff6a9b7049e1bed496beb9b04093c8ba6fb17caee42483d 00:14:25.091 10:03:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.bIE 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7599348a6d77e3884ff6a9b7049e1bed496beb9b04093c8ba6fb17caee42483d 3 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7599348a6d77e3884ff6a9b7049e1bed496beb9b04093c8ba6fb17caee42483d 3 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:25.091 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7599348a6d77e3884ff6a9b7049e1bed496beb9b04093c8ba6fb17caee42483d 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.bIE 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.bIE 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.bIE 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2534432 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2534432 ']' 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
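What the gen_dhchap_key calls above are doing: each one draws len/2 random bytes, renders them as a len-character hex string (xxd -p -c0 -l <len/2> /dev/urandom), and feeds that string through format_key DHHC-1 <key> <digest-id>; the resulting DHHC-1 secrets are what nvme connect consumes further down via --dhchap-secret / --dhchap-ctrl-secret. Below is a minimal sketch of that encoding, under the assumption (the body of the "python -" step is not shown in the trace) that the secret layout is DHHC-1:<hash-id>:<base64(key bytes + 4-byte little-endian CRC32)>: with hash ids 00/01/02/03 for null/sha256/sha384/sha512, matching the numeric digest arguments in the trace.

#!/usr/bin/env bash
# Hedged sketch of gen_dhchap_key/format_key as traced above; the CRC32 tail
# and its endianness are an assumption, not read from the log.

gen_dhchap_key_sketch() {
    local digest_id=$1 len=$2 key   # common.sh maps null/sha256/sha384/sha512 to 0..3 first
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # e.g. len=48 -> 24 random bytes as hex
    python3 - "$key" "$digest_id" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()                   # the hex string itself is the key material
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed integrity tail
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key_sketch 0 48   # null digest, 48-char key, as in target/auth.sh@67

Consistency check against the run itself: the first secret passed to nvme connect below, DHHC-1:00:NzI1YzgzMTk3YzJlMWU0Yj...==:, base64-decodes back to the 48-character hex key 725c8319... generated at target/auth.sh@67 plus a 4-byte tail, which fits this layout.
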
00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2534805 /var/tmp/host.sock 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2534805 ']' 00:14:25.349 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:25.350 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.350 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:25.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:25.350 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.350 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QjR 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.QjR 00:14:25.608 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.QjR 00:14:25.865 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.yKc ]] 00:14:25.865 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yKc 00:14:25.865 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.865 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.865 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.865 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yKc 00:14:25.865 10:03:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yKc 00:14:26.123 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:26.123 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.A9o 00:14:26.123 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.123 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.124 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.124 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.A9o 00:14:26.124 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.A9o 00:14:26.381 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.tZs ]] 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tZs 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tZs 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.tZs 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.bH4 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.bH4 00:14:26.382 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.bH4 00:14:26.639 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.u84 ]] 00:14:26.640 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u84 00:14:26.640 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.640 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.640 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.640 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u84 00:14:26.640 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.u84 00:14:26.898 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:26.898 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.bIE 00:14:26.898 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.898 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.898 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.898 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.bIE 00:14:26.898 10:03:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.bIE 00:14:26.898 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:26.898 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:26.898 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:26.898 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.898 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:27.156 10:03:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.156 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:27.422 00:14:27.422 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.422 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.422 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.680 { 00:14:27.680 "cntlid": 1, 00:14:27.680 "qid": 0, 00:14:27.680 "state": "enabled", 00:14:27.680 "thread": "nvmf_tgt_poll_group_000", 00:14:27.680 "listen_address": { 00:14:27.680 "trtype": "RDMA", 00:14:27.680 "adrfam": "IPv4", 00:14:27.680 "traddr": "192.168.100.8", 00:14:27.680 "trsvcid": "4420" 00:14:27.680 }, 00:14:27.680 "peer_address": { 00:14:27.680 "trtype": "RDMA", 00:14:27.680 "adrfam": "IPv4", 00:14:27.680 "traddr": "192.168.100.8", 00:14:27.680 "trsvcid": "58776" 00:14:27.680 }, 00:14:27.680 "auth": { 00:14:27.680 "state": "completed", 00:14:27.680 "digest": "sha256", 00:14:27.680 "dhgroup": "null" 00:14:27.680 } 00:14:27.680 } 00:14:27.680 ]' 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.680 10:03:12 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.680 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.939 10:03:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:14:28.504 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.761 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.761 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:28.761 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.761 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.761 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.762 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.020 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.020 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.020 10:03:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:29.020 00:14:29.020 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.020 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.020 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.279 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.279 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.279 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.279 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.279 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.279 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.279 { 00:14:29.279 "cntlid": 3, 00:14:29.279 "qid": 0, 00:14:29.279 "state": "enabled", 00:14:29.279 "thread": "nvmf_tgt_poll_group_000", 00:14:29.279 "listen_address": { 00:14:29.279 "trtype": "RDMA", 00:14:29.279 "adrfam": "IPv4", 00:14:29.279 "traddr": "192.168.100.8", 00:14:29.279 "trsvcid": "4420" 00:14:29.279 }, 00:14:29.279 "peer_address": { 00:14:29.279 "trtype": "RDMA", 00:14:29.279 "adrfam": "IPv4", 00:14:29.279 "traddr": "192.168.100.8", 00:14:29.279 "trsvcid": "53225" 00:14:29.279 }, 00:14:29.279 "auth": { 00:14:29.279 "state": "completed", 00:14:29.279 "digest": "sha256", 00:14:29.279 "dhgroup": "null" 00:14:29.279 } 00:14:29.279 } 00:14:29.279 ]' 00:14:29.279 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.279 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.279 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.538 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:29.538 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.538 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.538 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.538 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.538 10:03:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.472 10:03:15 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.472 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:30.730 00:14:30.730 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.730 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.730 10:03:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.989 { 00:14:30.989 "cntlid": 5, 00:14:30.989 "qid": 0, 00:14:30.989 "state": "enabled", 00:14:30.989 "thread": "nvmf_tgt_poll_group_000", 00:14:30.989 "listen_address": { 00:14:30.989 "trtype": "RDMA", 00:14:30.989 "adrfam": "IPv4", 00:14:30.989 "traddr": "192.168.100.8", 00:14:30.989 "trsvcid": "4420" 00:14:30.989 }, 00:14:30.989 "peer_address": { 00:14:30.989 "trtype": "RDMA", 00:14:30.989 "adrfam": "IPv4", 00:14:30.989 "traddr": "192.168.100.8", 00:14:30.989 "trsvcid": "46513" 00:14:30.989 }, 00:14:30.989 "auth": { 00:14:30.989 "state": "completed", 00:14:30.989 "digest": "sha256", 00:14:30.989 "dhgroup": "null" 00:14:30.989 } 00:14:30.989 } 00:14:30.989 ]' 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:30.989 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.248 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.248 10:03:16 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.248 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:31.248 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:14:32.185 10:03:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t 
rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.185 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.443 00:14:32.443 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.443 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.444 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.702 { 00:14:32.702 "cntlid": 7, 00:14:32.702 "qid": 0, 00:14:32.702 "state": "enabled", 00:14:32.702 "thread": "nvmf_tgt_poll_group_000", 00:14:32.702 "listen_address": { 00:14:32.702 "trtype": "RDMA", 00:14:32.702 "adrfam": "IPv4", 00:14:32.702 "traddr": "192.168.100.8", 00:14:32.702 "trsvcid": "4420" 00:14:32.702 }, 00:14:32.702 "peer_address": { 00:14:32.702 "trtype": "RDMA", 00:14:32.702 "adrfam": "IPv4", 00:14:32.702 "traddr": "192.168.100.8", 00:14:32.702 "trsvcid": "35086" 00:14:32.702 }, 00:14:32.702 "auth": { 00:14:32.702 "state": "completed", 00:14:32.702 "digest": "sha256", 00:14:32.702 "dhgroup": "null" 00:14:32.702 } 00:14:32.702 } 00:14:32.702 ]' 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.702 10:03:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.961 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme 
connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:14:33.528 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.787 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.787 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:33.788 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.788 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.047 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.047 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.047 10:03:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.047 00:14:34.047 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.047 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.047 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.321 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.321 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.321 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.321 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.321 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.321 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.321 { 00:14:34.321 "cntlid": 9, 00:14:34.321 "qid": 0, 00:14:34.321 "state": "enabled", 00:14:34.321 "thread": "nvmf_tgt_poll_group_000", 00:14:34.321 "listen_address": { 00:14:34.321 "trtype": "RDMA", 00:14:34.321 "adrfam": "IPv4", 00:14:34.321 "traddr": "192.168.100.8", 00:14:34.321 "trsvcid": "4420" 00:14:34.321 }, 00:14:34.321 "peer_address": { 00:14:34.321 "trtype": "RDMA", 00:14:34.321 "adrfam": "IPv4", 00:14:34.321 "traddr": "192.168.100.8", 00:14:34.321 "trsvcid": "44380" 00:14:34.321 }, 00:14:34.321 "auth": { 00:14:34.321 "state": "completed", 00:14:34.321 "digest": "sha256", 00:14:34.321 "dhgroup": "ffdhe2048" 00:14:34.321 } 00:14:34.321 } 00:14:34.321 ]' 00:14:34.321 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.321 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.321 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.607 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:34.607 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.607 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.607 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.607 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.607 10:03:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.544 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:35.803 00:14:35.803 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.803 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.803 10:03:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.062 { 00:14:36.062 "cntlid": 11, 00:14:36.062 "qid": 0, 00:14:36.062 "state": "enabled", 00:14:36.062 "thread": "nvmf_tgt_poll_group_000", 00:14:36.062 "listen_address": { 00:14:36.062 "trtype": "RDMA", 00:14:36.062 "adrfam": "IPv4", 00:14:36.062 "traddr": "192.168.100.8", 00:14:36.062 "trsvcid": "4420" 00:14:36.062 }, 00:14:36.062 "peer_address": { 00:14:36.062 "trtype": "RDMA", 00:14:36.062 "adrfam": "IPv4", 00:14:36.062 "traddr": "192.168.100.8", 00:14:36.062 "trsvcid": "35047" 00:14:36.062 }, 00:14:36.062 "auth": { 00:14:36.062 "state": "completed", 00:14:36.062 "digest": "sha256", 00:14:36.062 "dhgroup": "ffdhe2048" 00:14:36.062 } 00:14:36.062 } 00:14:36.062 ]' 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:36.062 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.063 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.063 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.063 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.322 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:14:36.890 10:03:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.149 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:37.408 00:14:37.408 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.408 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # 
jq -r '.[].name' 00:14:37.408 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.666 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.666 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.666 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.666 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.666 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.667 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.667 { 00:14:37.667 "cntlid": 13, 00:14:37.667 "qid": 0, 00:14:37.667 "state": "enabled", 00:14:37.667 "thread": "nvmf_tgt_poll_group_000", 00:14:37.667 "listen_address": { 00:14:37.667 "trtype": "RDMA", 00:14:37.667 "adrfam": "IPv4", 00:14:37.667 "traddr": "192.168.100.8", 00:14:37.667 "trsvcid": "4420" 00:14:37.667 }, 00:14:37.667 "peer_address": { 00:14:37.667 "trtype": "RDMA", 00:14:37.667 "adrfam": "IPv4", 00:14:37.667 "traddr": "192.168.100.8", 00:14:37.667 "trsvcid": "49771" 00:14:37.667 }, 00:14:37.667 "auth": { 00:14:37.667 "state": "completed", 00:14:37.667 "digest": "sha256", 00:14:37.667 "dhgroup": "ffdhe2048" 00:14:37.667 } 00:14:37.667 } 00:14:37.667 ]' 00:14:37.667 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.667 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.667 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.667 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:37.667 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.925 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.925 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.925 10:03:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.925 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:14:38.491 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.750 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:38.750 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.750 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.750 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.750 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.750 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:38.750 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:39.009 10:03:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:39.268 00:14:39.268 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.268 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.268 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.268 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.268 10:03:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.268 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.268 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.268 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.268 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.268 { 00:14:39.268 "cntlid": 15, 00:14:39.268 "qid": 0, 00:14:39.268 "state": "enabled", 00:14:39.268 "thread": "nvmf_tgt_poll_group_000", 00:14:39.268 "listen_address": { 00:14:39.268 "trtype": "RDMA", 00:14:39.268 "adrfam": "IPv4", 00:14:39.268 "traddr": "192.168.100.8", 00:14:39.268 "trsvcid": "4420" 00:14:39.268 }, 00:14:39.268 "peer_address": { 00:14:39.268 "trtype": "RDMA", 00:14:39.268 "adrfam": "IPv4", 00:14:39.268 "traddr": "192.168.100.8", 00:14:39.268 "trsvcid": "36689" 00:14:39.268 }, 00:14:39.268 "auth": { 00:14:39.268 "state": "completed", 00:14:39.268 "digest": "sha256", 00:14:39.268 "dhgroup": "ffdhe2048" 00:14:39.268 } 00:14:39.268 } 00:14:39.268 ]' 00:14:39.268 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.527 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.527 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.527 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:39.527 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.527 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.527 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.527 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:39.786 10:03:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:14:40.353 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.353 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:40.353 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.353 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.353 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.353 
10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:40.353 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:40.353 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.353 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.612 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.871 00:14:40.871 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.871 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.871 10:03:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.129 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.129 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.129 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:41.129 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.129 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.129 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:41.129 { 00:14:41.129 "cntlid": 17, 00:14:41.129 "qid": 0, 00:14:41.129 "state": "enabled", 00:14:41.129 "thread": "nvmf_tgt_poll_group_000", 00:14:41.129 "listen_address": { 00:14:41.129 "trtype": "RDMA", 00:14:41.129 "adrfam": "IPv4", 00:14:41.129 "traddr": "192.168.100.8", 00:14:41.129 "trsvcid": "4420" 00:14:41.129 }, 00:14:41.129 "peer_address": { 00:14:41.129 "trtype": "RDMA", 00:14:41.129 "adrfam": "IPv4", 00:14:41.129 "traddr": "192.168.100.8", 00:14:41.129 "trsvcid": "37920" 00:14:41.129 }, 00:14:41.129 "auth": { 00:14:41.129 "state": "completed", 00:14:41.129 "digest": "sha256", 00:14:41.129 "dhgroup": "ffdhe3072" 00:14:41.129 } 00:14:41.129 } 00:14:41.129 ]' 00:14:41.129 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:41.129 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:41.129 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:41.129 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:41.130 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:41.130 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:41.130 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:41.130 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.388 10:03:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:14:41.955 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.214 10:03:27 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.214 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.473 00:14:42.473 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.473 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.473 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.732 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.732 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.732 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.732 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.732 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.732 
10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.732 { 00:14:42.732 "cntlid": 19, 00:14:42.732 "qid": 0, 00:14:42.732 "state": "enabled", 00:14:42.732 "thread": "nvmf_tgt_poll_group_000", 00:14:42.732 "listen_address": { 00:14:42.732 "trtype": "RDMA", 00:14:42.732 "adrfam": "IPv4", 00:14:42.732 "traddr": "192.168.100.8", 00:14:42.732 "trsvcid": "4420" 00:14:42.732 }, 00:14:42.732 "peer_address": { 00:14:42.732 "trtype": "RDMA", 00:14:42.732 "adrfam": "IPv4", 00:14:42.732 "traddr": "192.168.100.8", 00:14:42.732 "trsvcid": "45038" 00:14:42.732 }, 00:14:42.732 "auth": { 00:14:42.732 "state": "completed", 00:14:42.732 "digest": "sha256", 00:14:42.732 "dhgroup": "ffdhe3072" 00:14:42.732 } 00:14:42.732 } 00:14:42.732 ]' 00:14:42.732 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.732 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.732 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.991 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:42.991 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.991 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.991 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.991 10:03:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.991 10:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:14:43.927 10:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.927 10:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:43.927 10:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.927 10:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.927 10:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.927 10:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.927 10:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:43.927 10:03:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.927 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.187 00:14:44.187 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.187 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.187 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:44.445 { 00:14:44.445 "cntlid": 21, 00:14:44.445 "qid": 0, 00:14:44.445 "state": "enabled", 00:14:44.445 "thread": "nvmf_tgt_poll_group_000", 
00:14:44.445 "listen_address": { 00:14:44.445 "trtype": "RDMA", 00:14:44.445 "adrfam": "IPv4", 00:14:44.445 "traddr": "192.168.100.8", 00:14:44.445 "trsvcid": "4420" 00:14:44.445 }, 00:14:44.445 "peer_address": { 00:14:44.445 "trtype": "RDMA", 00:14:44.445 "adrfam": "IPv4", 00:14:44.445 "traddr": "192.168.100.8", 00:14:44.445 "trsvcid": "48570" 00:14:44.445 }, 00:14:44.445 "auth": { 00:14:44.445 "state": "completed", 00:14:44.445 "digest": "sha256", 00:14:44.445 "dhgroup": "ffdhe3072" 00:14:44.445 } 00:14:44.445 } 00:14:44.445 ]' 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:44.445 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:44.704 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.704 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.704 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.704 10:03:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:14:45.272 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.530 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:45.530 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.530 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.530 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.530 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:45.530 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:45.530 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 
00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:45.789 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.047 00:14:46.047 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:46.047 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:46.047 10:03:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.047 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.047 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.047 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.047 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.047 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.047 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:46.047 { 00:14:46.047 "cntlid": 23, 00:14:46.047 "qid": 0, 00:14:46.047 "state": "enabled", 00:14:46.047 "thread": "nvmf_tgt_poll_group_000", 00:14:46.047 "listen_address": { 00:14:46.047 "trtype": "RDMA", 00:14:46.047 "adrfam": "IPv4", 00:14:46.047 "traddr": "192.168.100.8", 00:14:46.047 "trsvcid": "4420" 00:14:46.047 }, 00:14:46.047 "peer_address": { 00:14:46.047 "trtype": "RDMA", 00:14:46.047 "adrfam": "IPv4", 00:14:46.047 "traddr": "192.168.100.8", 00:14:46.047 "trsvcid": "38989" 00:14:46.047 }, 00:14:46.047 
"auth": { 00:14:46.047 "state": "completed", 00:14:46.047 "digest": "sha256", 00:14:46.047 "dhgroup": "ffdhe3072" 00:14:46.047 } 00:14:46.047 } 00:14:46.047 ]' 00:14:46.047 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:46.305 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:46.305 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:46.305 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:46.305 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:46.305 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.305 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.305 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.563 10:03:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:14:47.130 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.130 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:47.130 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.130 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.130 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.131 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.131 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:47.131 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.131 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.389 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.648 00:14:47.648 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.648 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.648 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.907 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.907 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.908 { 00:14:47.908 "cntlid": 25, 00:14:47.908 "qid": 0, 00:14:47.908 "state": "enabled", 00:14:47.908 "thread": "nvmf_tgt_poll_group_000", 00:14:47.908 "listen_address": { 00:14:47.908 "trtype": "RDMA", 00:14:47.908 "adrfam": "IPv4", 00:14:47.908 "traddr": "192.168.100.8", 00:14:47.908 "trsvcid": "4420" 00:14:47.908 }, 00:14:47.908 "peer_address": { 00:14:47.908 "trtype": "RDMA", 00:14:47.908 "adrfam": "IPv4", 00:14:47.908 "traddr": "192.168.100.8", 00:14:47.908 "trsvcid": "60910" 00:14:47.908 }, 00:14:47.908 "auth": { 00:14:47.908 "state": "completed", 00:14:47.908 "digest": "sha256", 00:14:47.908 "dhgroup": "ffdhe4096" 00:14:47.908 } 00:14:47.908 } 00:14:47.908 ]' 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.908 10:03:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.166 10:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:14:48.751 10:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.029 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.029 10:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:49.029 10:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.029 10:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.029 10:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.029 10:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.029 10:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.029 10:03:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.029 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.288 00:14:49.288 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.288 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.288 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.546 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.546 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.546 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.546 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.546 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.546 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.546 { 00:14:49.546 "cntlid": 27, 00:14:49.546 "qid": 0, 00:14:49.546 "state": "enabled", 00:14:49.546 "thread": "nvmf_tgt_poll_group_000", 00:14:49.546 "listen_address": { 00:14:49.546 "trtype": "RDMA", 00:14:49.546 "adrfam": "IPv4", 00:14:49.546 "traddr": "192.168.100.8", 00:14:49.546 "trsvcid": "4420" 00:14:49.546 }, 00:14:49.546 "peer_address": { 00:14:49.546 "trtype": "RDMA", 00:14:49.546 "adrfam": "IPv4", 00:14:49.546 "traddr": "192.168.100.8", 00:14:49.546 "trsvcid": "54894" 00:14:49.546 }, 00:14:49.546 "auth": { 00:14:49.546 "state": "completed", 00:14:49.546 "digest": "sha256", 00:14:49.546 "dhgroup": "ffdhe4096" 00:14:49.546 } 00:14:49.546 } 00:14:49.546 ]' 00:14:49.546 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.546 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.546 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.547 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:49.547 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.547 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.547 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.547 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.805 10:03:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:14:50.372 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.631 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:14:50.631 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.890 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.890 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.890 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.890 10:03:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:51.149 00:14:51.149 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.149 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:51.149 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.149 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.149 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.149 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.149 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.149 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.149 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:51.149 { 00:14:51.150 "cntlid": 29, 00:14:51.150 "qid": 0, 00:14:51.150 "state": "enabled", 00:14:51.150 "thread": "nvmf_tgt_poll_group_000", 00:14:51.150 "listen_address": { 00:14:51.150 "trtype": "RDMA", 00:14:51.150 "adrfam": "IPv4", 00:14:51.150 "traddr": "192.168.100.8", 00:14:51.150 "trsvcid": "4420" 00:14:51.150 }, 00:14:51.150 "peer_address": { 00:14:51.150 "trtype": "RDMA", 00:14:51.150 "adrfam": "IPv4", 00:14:51.150 "traddr": "192.168.100.8", 00:14:51.150 "trsvcid": "54727" 00:14:51.150 }, 00:14:51.150 "auth": { 00:14:51.150 "state": "completed", 00:14:51.150 "digest": "sha256", 00:14:51.150 "dhgroup": "ffdhe4096" 00:14:51.150 } 00:14:51.150 } 00:14:51.150 ]' 00:14:51.150 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:51.410 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:51.410 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:51.410 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:51.410 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
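
Each pass is then verified from the target's point of view: nvmf_subsystem_get_qpairs is queried for the subsystem and the first qpair's auth object is picked apart with jq, and the test only proceeds if the digest, DH group and state the target recorded match what was configured. Roughly, assuming rpc_cmd talks to the target's default RPC socket as it does in the trace:

  # confirm the target saw the expected authentication parameters
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
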
00:14:51.410 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.410 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.410 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.410 10:03:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:14:52.346 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.346 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:52.346 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.346 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.346 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.346 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.346 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:52.346 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.347 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.605 00:14:52.605 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:52.605 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.605 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.864 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.864 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.864 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.864 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.864 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.864 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.864 { 00:14:52.864 "cntlid": 31, 00:14:52.864 "qid": 0, 00:14:52.864 "state": "enabled", 00:14:52.864 "thread": "nvmf_tgt_poll_group_000", 00:14:52.864 "listen_address": { 00:14:52.864 "trtype": "RDMA", 00:14:52.864 "adrfam": "IPv4", 00:14:52.864 "traddr": "192.168.100.8", 00:14:52.864 "trsvcid": "4420" 00:14:52.864 }, 00:14:52.864 "peer_address": { 00:14:52.864 "trtype": "RDMA", 00:14:52.864 "adrfam": "IPv4", 00:14:52.864 "traddr": "192.168.100.8", 00:14:52.864 "trsvcid": "41322" 00:14:52.864 }, 00:14:52.864 "auth": { 00:14:52.864 "state": "completed", 00:14:52.864 "digest": "sha256", 00:14:52.864 "dhgroup": "ffdhe4096" 00:14:52.864 } 00:14:52.864 } 00:14:52.864 ]' 00:14:52.864 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.864 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.864 10:03:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.123 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:53.123 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.123 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.123 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.123 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.123 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:14:54.060 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.060 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.060 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:54.060 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.060 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.060 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.060 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.060 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.060 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.060 10:03:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.060 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:54.627 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.627 { 00:14:54.627 "cntlid": 33, 00:14:54.627 "qid": 0, 00:14:54.627 "state": "enabled", 00:14:54.627 "thread": "nvmf_tgt_poll_group_000", 00:14:54.627 "listen_address": { 00:14:54.627 "trtype": "RDMA", 00:14:54.627 "adrfam": "IPv4", 00:14:54.627 "traddr": "192.168.100.8", 00:14:54.627 "trsvcid": "4420" 00:14:54.627 }, 00:14:54.627 "peer_address": { 00:14:54.627 "trtype": "RDMA", 00:14:54.627 "adrfam": "IPv4", 00:14:54.627 "traddr": "192.168.100.8", 00:14:54.627 "trsvcid": "38166" 00:14:54.627 }, 00:14:54.627 "auth": { 00:14:54.627 "state": "completed", 00:14:54.627 "digest": "sha256", 00:14:54.627 "dhgroup": "ffdhe6144" 00:14:54.627 } 00:14:54.627 } 00:14:54.627 ]' 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:54.627 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.886 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.886 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.886 10:03:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.886 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:14:55.453 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.712 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:55.712 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.712 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.712 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.712 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:55.712 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.712 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:55.970 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:55.970 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.970 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:55.971 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:55.971 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:55.971 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.971 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.971 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.971 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.971 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.971 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.971 10:03:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:56.229 00:14:56.229 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.229 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.229 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.487 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.487 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.487 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.487 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.487 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.487 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:56.487 { 00:14:56.487 "cntlid": 35, 00:14:56.487 "qid": 0, 00:14:56.487 "state": "enabled", 00:14:56.487 "thread": "nvmf_tgt_poll_group_000", 00:14:56.487 "listen_address": { 00:14:56.487 "trtype": "RDMA", 00:14:56.487 "adrfam": "IPv4", 00:14:56.487 "traddr": "192.168.100.8", 00:14:56.487 "trsvcid": "4420" 00:14:56.487 }, 00:14:56.487 "peer_address": { 00:14:56.488 "trtype": "RDMA", 00:14:56.488 "adrfam": "IPv4", 00:14:56.488 "traddr": "192.168.100.8", 00:14:56.488 "trsvcid": "46792" 00:14:56.488 }, 00:14:56.488 "auth": { 00:14:56.488 "state": "completed", 00:14:56.488 "digest": "sha256", 00:14:56.488 "dhgroup": "ffdhe6144" 00:14:56.488 } 00:14:56.488 } 00:14:56.488 ]' 00:14:56.488 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.488 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.488 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.488 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:56.488 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.488 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.488 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.488 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.746 10:03:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:14:57.312 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.571 10:03:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:14:58.138 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:58.138 { 00:14:58.138 "cntlid": 37, 00:14:58.138 "qid": 0, 00:14:58.138 "state": "enabled", 00:14:58.138 "thread": "nvmf_tgt_poll_group_000", 00:14:58.138 "listen_address": { 00:14:58.138 "trtype": "RDMA", 00:14:58.138 "adrfam": "IPv4", 00:14:58.138 "traddr": "192.168.100.8", 00:14:58.138 "trsvcid": "4420" 00:14:58.138 }, 00:14:58.138 "peer_address": { 00:14:58.138 "trtype": "RDMA", 00:14:58.138 "adrfam": "IPv4", 00:14:58.138 "traddr": "192.168.100.8", 00:14:58.138 "trsvcid": "34019" 00:14:58.138 }, 00:14:58.138 "auth": { 00:14:58.138 "state": "completed", 00:14:58.138 "digest": "sha256", 00:14:58.138 "dhgroup": "ffdhe6144" 00:14:58.138 } 00:14:58.138 } 00:14:58.138 ]' 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.138 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.395 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:58.395 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.395 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.395 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.395 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.395 10:03:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:14:59.329 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:14:59.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.329 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:59.329 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.329 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.329 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.329 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:59.330 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:59.896 00:14:59.896 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.896 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.896 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.896 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.896 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.896 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.896 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.896 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.896 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.896 { 00:14:59.896 "cntlid": 39, 00:14:59.896 "qid": 0, 00:14:59.896 "state": "enabled", 00:14:59.896 "thread": "nvmf_tgt_poll_group_000", 00:14:59.896 "listen_address": { 00:14:59.896 "trtype": "RDMA", 00:14:59.896 "adrfam": "IPv4", 00:14:59.896 "traddr": "192.168.100.8", 00:14:59.897 "trsvcid": "4420" 00:14:59.897 }, 00:14:59.897 "peer_address": { 00:14:59.897 "trtype": "RDMA", 00:14:59.897 "adrfam": "IPv4", 00:14:59.897 "traddr": "192.168.100.8", 00:14:59.897 "trsvcid": "44151" 00:14:59.897 }, 00:14:59.897 "auth": { 00:14:59.897 "state": "completed", 00:14:59.897 "digest": "sha256", 00:14:59.897 "dhgroup": "ffdhe6144" 00:14:59.897 } 00:14:59.897 } 00:14:59.897 ]' 00:14:59.897 10:03:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.897 10:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.897 10:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.154 10:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:00.154 10:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.154 10:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.154 10:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.154 10:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.154 10:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:15:01.089 10:03:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.089 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:01.653 00:15:01.653 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.653 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.653 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 
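(The run above — bdev_nvme_set_options restricted to sha256/ffdhe8192, nvmf_subsystem_add_host with key0/ckey0, bdev_nvme_attach_controller over RDMA, then the controller-name check — is one pass of the test's connect_authenticate loop. A minimal sketch of that pass, reconstructed from the commands visible in this trace rather than from the target/auth.sh source; key0/ckey0 are keyring names registered earlier in the test:

    # One connect_authenticate pass (sketch; values as seen in the trace above).
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Host side: allow exactly one digest/dhgroup combination.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target side: authorize the host NQN with the key pair under test.
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host side: attach over RDMA, which forces DH-HMAC-CHAP to run.
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
)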
00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.910 { 00:15:01.910 "cntlid": 41, 00:15:01.910 "qid": 0, 00:15:01.910 "state": "enabled", 00:15:01.910 "thread": "nvmf_tgt_poll_group_000", 00:15:01.910 "listen_address": { 00:15:01.910 "trtype": "RDMA", 00:15:01.910 "adrfam": "IPv4", 00:15:01.910 "traddr": "192.168.100.8", 00:15:01.910 "trsvcid": "4420" 00:15:01.910 }, 00:15:01.910 "peer_address": { 00:15:01.910 "trtype": "RDMA", 00:15:01.910 "adrfam": "IPv4", 00:15:01.910 "traddr": "192.168.100.8", 00:15:01.910 "trsvcid": "45805" 00:15:01.910 }, 00:15:01.910 "auth": { 00:15:01.910 "state": "completed", 00:15:01.910 "digest": "sha256", 00:15:01.910 "dhgroup": "ffdhe8192" 00:15:01.910 } 00:15:01.910 } 00:15:01.910 ]' 00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:01.910 10:03:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.910 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.910 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.910 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.168 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:15:02.733 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.992 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:02.992 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.992 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
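(The tail of that pass, visible just above, also exercises the kernel initiator with the generated in-band secrets before the host entry is removed. Roughly, as a sketch — the DHHC-1 strings are elided here and would be the run's generated secrets:

    # Detach the SPDK host controller, run the kernel-initiator leg, tear down.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t rdma -a 192.168.100.8 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
        --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
)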
00:15:02.992 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.992 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.992 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.992 10:03:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:02.992 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:02.992 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:02.992 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:02.992 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:02.992 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:02.992 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.992 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:02.992 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.992 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.250 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.250 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.250 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:03.510 00:15:03.510 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.510 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.510 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.823 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.823 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.823 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:03.823 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.823 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.823 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:03.823 { 00:15:03.823 "cntlid": 43, 00:15:03.823 "qid": 0, 00:15:03.823 "state": "enabled", 00:15:03.823 "thread": "nvmf_tgt_poll_group_000", 00:15:03.823 "listen_address": { 00:15:03.823 "trtype": "RDMA", 00:15:03.823 "adrfam": "IPv4", 00:15:03.823 "traddr": "192.168.100.8", 00:15:03.823 "trsvcid": "4420" 00:15:03.823 }, 00:15:03.823 "peer_address": { 00:15:03.823 "trtype": "RDMA", 00:15:03.823 "adrfam": "IPv4", 00:15:03.823 "traddr": "192.168.100.8", 00:15:03.823 "trsvcid": "42759" 00:15:03.823 }, 00:15:03.823 "auth": { 00:15:03.823 "state": "completed", 00:15:03.823 "digest": "sha256", 00:15:03.823 "dhgroup": "ffdhe8192" 00:15:03.823 } 00:15:03.823 } 00:15:03.823 ]' 00:15:03.823 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:03.823 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.823 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:03.823 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:03.824 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:03.824 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.824 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.824 10:03:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.081 10:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:15:04.649 10:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.908 10:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:04.908 10:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.908 10:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.908 10:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.908 10:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:04.908 10:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:04.908 10:03:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.165 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:05.444 00:15:05.444 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:05.444 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:05.444 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.701 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.701 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.702 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.702 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.702 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.702 10:03:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:05.702 { 00:15:05.702 "cntlid": 45, 00:15:05.702 "qid": 0, 00:15:05.702 "state": "enabled", 00:15:05.702 "thread": "nvmf_tgt_poll_group_000", 00:15:05.702 "listen_address": { 00:15:05.702 "trtype": "RDMA", 00:15:05.702 "adrfam": "IPv4", 00:15:05.702 "traddr": "192.168.100.8", 00:15:05.702 "trsvcid": "4420" 00:15:05.702 }, 00:15:05.702 "peer_address": { 00:15:05.702 "trtype": "RDMA", 00:15:05.702 "adrfam": "IPv4", 00:15:05.702 "traddr": "192.168.100.8", 00:15:05.702 "trsvcid": "55696" 00:15:05.702 }, 00:15:05.702 "auth": { 00:15:05.702 "state": "completed", 00:15:05.702 "digest": "sha256", 00:15:05.702 "dhgroup": "ffdhe8192" 00:15:05.702 } 00:15:05.702 } 00:15:05.702 ]' 00:15:05.702 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:05.702 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.702 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:05.702 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:05.702 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:05.960 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.960 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.960 10:03:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.960 10:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:15:06.893 10:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.893 10:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:06.893 10:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.893 10:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.893 10:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.893 10:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.893 10:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:06.893 10:03:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.893 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:07.460 00:15:07.460 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:07.460 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:07.460 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.719 { 00:15:07.719 "cntlid": 47, 00:15:07.719 "qid": 0, 00:15:07.719 "state": "enabled", 00:15:07.719 "thread": "nvmf_tgt_poll_group_000", 00:15:07.719 "listen_address": { 00:15:07.719 "trtype": "RDMA", 00:15:07.719 "adrfam": "IPv4", 00:15:07.719 "traddr": "192.168.100.8", 00:15:07.719 
"trsvcid": "4420" 00:15:07.719 }, 00:15:07.719 "peer_address": { 00:15:07.719 "trtype": "RDMA", 00:15:07.719 "adrfam": "IPv4", 00:15:07.719 "traddr": "192.168.100.8", 00:15:07.719 "trsvcid": "37818" 00:15:07.719 }, 00:15:07.719 "auth": { 00:15:07.719 "state": "completed", 00:15:07.719 "digest": "sha256", 00:15:07.719 "dhgroup": "ffdhe8192" 00:15:07.719 } 00:15:07.719 } 00:15:07.719 ]' 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.719 10:03:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.978 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:15:08.546 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.804 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha384 null 0 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.804 10:03:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.063 00:15:09.063 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.063 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.063 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.322 { 00:15:09.322 "cntlid": 49, 00:15:09.322 "qid": 0, 00:15:09.322 "state": "enabled", 00:15:09.322 "thread": "nvmf_tgt_poll_group_000", 00:15:09.322 "listen_address": { 00:15:09.322 "trtype": "RDMA", 00:15:09.322 "adrfam": "IPv4", 00:15:09.322 "traddr": "192.168.100.8", 00:15:09.322 "trsvcid": "4420" 00:15:09.322 }, 00:15:09.322 "peer_address": { 00:15:09.322 "trtype": "RDMA", 00:15:09.322 "adrfam": 
"IPv4", 00:15:09.322 "traddr": "192.168.100.8", 00:15:09.322 "trsvcid": "32955" 00:15:09.322 }, 00:15:09.322 "auth": { 00:15:09.322 "state": "completed", 00:15:09.322 "digest": "sha384", 00:15:09.322 "dhgroup": "null" 00:15:09.322 } 00:15:09.322 } 00:15:09.322 ]' 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.322 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.582 10:03:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:15:10.149 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:10.408 10:03:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.408 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.667 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.667 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.667 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.667 00:15:10.667 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:10.667 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:10.667 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.926 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.926 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.926 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.926 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.926 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.926 10:03:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:10.926 { 00:15:10.926 "cntlid": 51, 00:15:10.926 "qid": 0, 00:15:10.926 "state": "enabled", 00:15:10.926 "thread": "nvmf_tgt_poll_group_000", 00:15:10.926 "listen_address": { 00:15:10.926 "trtype": "RDMA", 00:15:10.926 "adrfam": "IPv4", 00:15:10.926 "traddr": "192.168.100.8", 00:15:10.926 "trsvcid": "4420" 00:15:10.926 }, 00:15:10.926 "peer_address": { 00:15:10.926 "trtype": "RDMA", 00:15:10.926 "adrfam": "IPv4", 00:15:10.926 "traddr": "192.168.100.8", 00:15:10.926 "trsvcid": "37489" 00:15:10.926 }, 00:15:10.926 "auth": { 00:15:10.926 "state": "completed", 00:15:10.926 "digest": "sha384", 00:15:10.926 "dhgroup": "null" 00:15:10.926 } 00:15:10.926 } 00:15:10.926 ]' 00:15:10.926 10:03:55 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:10.926 10:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:10.926 10:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:10.926 10:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:10.927 10:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.185 10:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.185 10:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.185 10:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.185 10:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:15:12.122 10:03:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.122 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:12.381 00:15:12.381 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:12.381 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:12.381 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.640 { 00:15:12.640 "cntlid": 53, 00:15:12.640 "qid": 0, 00:15:12.640 "state": "enabled", 00:15:12.640 "thread": "nvmf_tgt_poll_group_000", 00:15:12.640 "listen_address": { 00:15:12.640 "trtype": "RDMA", 00:15:12.640 "adrfam": "IPv4", 00:15:12.640 "traddr": "192.168.100.8", 00:15:12.640 "trsvcid": "4420" 00:15:12.640 }, 00:15:12.640 "peer_address": { 00:15:12.640 "trtype": "RDMA", 00:15:12.640 "adrfam": "IPv4", 00:15:12.640 "traddr": "192.168.100.8", 00:15:12.640 "trsvcid": "35882" 00:15:12.640 }, 00:15:12.640 "auth": { 00:15:12.640 "state": "completed", 00:15:12.640 "digest": "sha384", 00:15:12.640 "dhgroup": "null" 00:15:12.640 } 00:15:12.640 } 00:15:12.640 ]' 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 
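(Each pass is validated by reading back the negotiated parameters, which is what the jq filters here assert. A sketch of that check, using the same RPCs as the trace; the expected digest/dhgroup are whatever pair the pass configured — at this point sha384 with the null dhgroup:

    # Confirm the controller attached and the qpair authenticated as configured.
    name=$($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
)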
00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.640 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.898 10:03:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:15:13.465 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:13.724 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.983 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.983 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.983 10:03:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:13.983 00:15:13.983 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.983 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.983 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.242 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.242 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.242 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.242 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.242 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.242 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.242 { 00:15:14.242 "cntlid": 55, 00:15:14.242 "qid": 0, 00:15:14.242 "state": "enabled", 00:15:14.242 "thread": "nvmf_tgt_poll_group_000", 00:15:14.242 "listen_address": { 00:15:14.242 "trtype": "RDMA", 00:15:14.243 "adrfam": "IPv4", 00:15:14.243 "traddr": "192.168.100.8", 00:15:14.243 "trsvcid": "4420" 00:15:14.243 }, 00:15:14.243 "peer_address": { 00:15:14.243 "trtype": "RDMA", 00:15:14.243 "adrfam": "IPv4", 00:15:14.243 "traddr": "192.168.100.8", 00:15:14.243 "trsvcid": "33511" 00:15:14.243 }, 00:15:14.243 "auth": { 00:15:14.243 "state": "completed", 00:15:14.243 "digest": "sha384", 00:15:14.243 "dhgroup": "null" 00:15:14.243 } 00:15:14.243 } 00:15:14.243 ]' 00:15:14.243 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.243 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:14.243 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.243 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:14.502 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.502 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.502 10:03:59 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.502 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.502 10:03:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:15:15.070 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.329 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:15.329 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.329 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.329 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.329 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.329 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:15.329 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.329 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.589 10:04:00 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.589 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:15.589 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.848 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.848 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.848 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:15.848 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.848 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:15.848 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:15.848 { 00:15:15.848 "cntlid": 57, 00:15:15.848 "qid": 0, 00:15:15.848 "state": "enabled", 00:15:15.848 "thread": "nvmf_tgt_poll_group_000", 00:15:15.848 "listen_address": { 00:15:15.848 "trtype": "RDMA", 00:15:15.848 "adrfam": "IPv4", 00:15:15.849 "traddr": "192.168.100.8", 00:15:15.849 "trsvcid": "4420" 00:15:15.849 }, 00:15:15.849 "peer_address": { 00:15:15.849 "trtype": "RDMA", 00:15:15.849 "adrfam": "IPv4", 00:15:15.849 "traddr": "192.168.100.8", 00:15:15.849 "trsvcid": "45659" 00:15:15.849 }, 00:15:15.849 "auth": { 00:15:15.849 "state": "completed", 00:15:15.849 "digest": "sha384", 00:15:15.849 "dhgroup": "ffdhe2048" 00:15:15.849 } 00:15:15.849 } 00:15:15.849 ]' 00:15:15.849 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:15.849 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:15.849 10:04:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:16.108 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:16.108 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:16.108 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:16.108 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:16.108 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.108 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:15:17.044 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.044 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:17.044 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.044 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.044 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.044 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:17.044 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.044 10:04:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.044 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.303 00:15:17.303 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.303 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.303 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.562 { 00:15:17.562 "cntlid": 59, 00:15:17.562 "qid": 0, 00:15:17.562 "state": "enabled", 00:15:17.562 "thread": "nvmf_tgt_poll_group_000", 00:15:17.562 "listen_address": { 00:15:17.562 "trtype": "RDMA", 00:15:17.562 "adrfam": "IPv4", 00:15:17.562 "traddr": "192.168.100.8", 00:15:17.562 "trsvcid": "4420" 00:15:17.562 }, 00:15:17.562 "peer_address": { 00:15:17.562 "trtype": "RDMA", 00:15:17.562 "adrfam": "IPv4", 00:15:17.562 "traddr": "192.168.100.8", 00:15:17.562 "trsvcid": "57864" 00:15:17.562 }, 00:15:17.562 "auth": { 00:15:17.562 "state": "completed", 00:15:17.562 "digest": "sha384", 00:15:17.562 "dhgroup": "ffdhe2048" 00:15:17.562 } 00:15:17.562 } 00:15:17.562 ]' 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.562 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.821 10:04:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:15:18.433 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:18.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.691 10:04:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 
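
The nvme connect call above repeats the DH-HMAC-CHAP handshake from the Linux kernel host: instead of key names registered over RPC, nvme-cli takes the secrets themselves in the DHHC-1 representation (values shortened here; the full secrets appear in the trace). Supplying --dhchap-ctrl-secret as well makes the authentication bidirectional:

    nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret 'DHHC-1:01:Mzhi...dbU/:' \
        --dhchap-ctrl-secret 'DHHC-1:02:MzA3.../+y2Sw==:'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
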
192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:18.950 00:15:18.950 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:18.950 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.950 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.209 { 00:15:19.209 "cntlid": 61, 00:15:19.209 "qid": 0, 00:15:19.209 "state": "enabled", 00:15:19.209 "thread": "nvmf_tgt_poll_group_000", 00:15:19.209 "listen_address": { 00:15:19.209 "trtype": "RDMA", 00:15:19.209 "adrfam": "IPv4", 00:15:19.209 "traddr": "192.168.100.8", 00:15:19.209 "trsvcid": "4420" 00:15:19.209 }, 00:15:19.209 "peer_address": { 00:15:19.209 "trtype": "RDMA", 00:15:19.209 "adrfam": "IPv4", 00:15:19.209 "traddr": "192.168.100.8", 00:15:19.209 "trsvcid": "35918" 00:15:19.209 }, 00:15:19.209 "auth": { 00:15:19.209 "state": "completed", 00:15:19.209 "digest": "sha384", 00:15:19.209 "dhgroup": "ffdhe2048" 00:15:19.209 } 00:15:19.209 } 00:15:19.209 ]' 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:19.209 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:19.467 10:04:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret 
DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:15:20.033 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:20.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.293 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.552 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:20.552 00:15:20.552 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:20.552 10:04:05 
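
Note that for keyid 3 the add_host and attach_controller calls above carry only --dhchap-key key3, and the nvme connect that follows passes only --dhchap-secret: auth.sh@37 builds the controller-key arguments with a ${var:+...} expansion, which yields nothing when no ckey is configured for that key index. The idiom in isolation (array contents hypothetical):

    ckeys=([0]=secret0 [1]=secret1 [2]=secret2 [3]=)   # no ctrlr key for key3
    keyid=3
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # prints 0; for keyid 0..2 it would print 2
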
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:20.552 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:20.810 { 00:15:20.810 "cntlid": 63, 00:15:20.810 "qid": 0, 00:15:20.810 "state": "enabled", 00:15:20.810 "thread": "nvmf_tgt_poll_group_000", 00:15:20.810 "listen_address": { 00:15:20.810 "trtype": "RDMA", 00:15:20.810 "adrfam": "IPv4", 00:15:20.810 "traddr": "192.168.100.8", 00:15:20.810 "trsvcid": "4420" 00:15:20.810 }, 00:15:20.810 "peer_address": { 00:15:20.810 "trtype": "RDMA", 00:15:20.810 "adrfam": "IPv4", 00:15:20.810 "traddr": "192.168.100.8", 00:15:20.810 "trsvcid": "43084" 00:15:20.810 }, 00:15:20.810 "auth": { 00:15:20.810 "state": "completed", 00:15:20.810 "digest": "sha384", 00:15:20.810 "dhgroup": "ffdhe2048" 00:15:20.810 } 00:15:20.810 } 00:15:20.810 ]' 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:20.810 10:04:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:21.068 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:21.068 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:21.068 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:21.068 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:15:21.634 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.892 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:21.892 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.892 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.892 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.892 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:21.892 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.892 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:21.892 10:04:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.150 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:22.409 00:15:22.409 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:22.409 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:22.409 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
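
The auth.sh@92-96 markers above give the sweep's shape: an outer loop over DH groups and an inner loop over key indices, reconfiguring the host before every connect_authenticate. In this stretch of the log the digest is fixed at sha384; a sketch of the implied structure (the dhgroups and keys arrays are defined earlier in auth.sh, outside this excerpt):

    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha384 --dhchap-dhgroups "$dhgroup"
            connect_authenticate sha384 "$dhgroup" "$keyid"
        done
    done
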
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.409 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.409 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.409 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.409 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.409 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.409 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.409 { 00:15:22.409 "cntlid": 65, 00:15:22.409 "qid": 0, 00:15:22.409 "state": "enabled", 00:15:22.409 "thread": "nvmf_tgt_poll_group_000", 00:15:22.409 "listen_address": { 00:15:22.409 "trtype": "RDMA", 00:15:22.409 "adrfam": "IPv4", 00:15:22.409 "traddr": "192.168.100.8", 00:15:22.409 "trsvcid": "4420" 00:15:22.409 }, 00:15:22.409 "peer_address": { 00:15:22.409 "trtype": "RDMA", 00:15:22.409 "adrfam": "IPv4", 00:15:22.409 "traddr": "192.168.100.8", 00:15:22.409 "trsvcid": "37088" 00:15:22.409 }, 00:15:22.409 "auth": { 00:15:22.409 "state": "completed", 00:15:22.409 "digest": "sha384", 00:15:22.409 "dhgroup": "ffdhe3072" 00:15:22.409 } 00:15:22.409 } 00:15:22.409 ]' 00:15:22.409 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.668 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.668 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.668 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.668 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.668 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.668 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.668 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.926 10:04:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:15:23.493 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.493 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.493 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:23.493 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.493 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.493 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.493 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.493 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:23.493 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:23.751 10:04:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:24.010 00:15:24.010 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:24.010 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:24.010 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.268 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.268 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
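
Every rpc_cmd in this trace is bracketed by the same autotest_common.sh@561/@10/@589 lines: tracing is switched off around the RPC plumbing, and what looks like an exit-status assertion ([[ 0 == 0 ]]) runs afterwards. A simplified stand-in for that wrapper, assuming an xtrace_restore counterpart and a $rootdir prefix (SPDK's real helper multiplexes over a persistent rpc.py session rather than spawning one per call):

    rpc_cmd() {
        xtrace_disable                        # the @561 lines above
        local status=0
        "$rootdir/scripts/rpc.py" "$@" || status=$?
        xtrace_restore
        [[ $status == 0 ]]                    # the @589 "[[ 0 == 0 ]]" lines
    }
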
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.268 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.268 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.268 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.268 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.268 { 00:15:24.268 "cntlid": 67, 00:15:24.268 "qid": 0, 00:15:24.268 "state": "enabled", 00:15:24.268 "thread": "nvmf_tgt_poll_group_000", 00:15:24.268 "listen_address": { 00:15:24.268 "trtype": "RDMA", 00:15:24.268 "adrfam": "IPv4", 00:15:24.268 "traddr": "192.168.100.8", 00:15:24.268 "trsvcid": "4420" 00:15:24.268 }, 00:15:24.268 "peer_address": { 00:15:24.268 "trtype": "RDMA", 00:15:24.268 "adrfam": "IPv4", 00:15:24.268 "traddr": "192.168.100.8", 00:15:24.268 "trsvcid": "52320" 00:15:24.268 }, 00:15:24.268 "auth": { 00:15:24.268 "state": "completed", 00:15:24.268 "digest": "sha384", 00:15:24.268 "dhgroup": "ffdhe3072" 00:15:24.268 } 00:15:24.268 } 00:15:24.268 ]' 00:15:24.268 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.268 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.268 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.269 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:24.269 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.269 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.269 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.269 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.527 10:04:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:15:25.093 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.351 10:04:10 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.351 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:25.609 00:15:25.609 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:25.609 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:25.609 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:25.868 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:25.868 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:25.868 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.868 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.868 
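
Throughout the log, auth.sh@31 shows every hostrpc call expanding to the same rpc.py invocation with -s /var/tmp/host.sock: the test drives two SPDK applications, and the host-side bdev_nvme app listens on its own RPC socket, separate from the target that rpc_cmd addresses. The wrapper those expansions imply:

    hostrpc() {
        /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/host.sock "$@"
    }
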
10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.868 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:25.868 { 00:15:25.868 "cntlid": 69, 00:15:25.868 "qid": 0, 00:15:25.868 "state": "enabled", 00:15:25.868 "thread": "nvmf_tgt_poll_group_000", 00:15:25.868 "listen_address": { 00:15:25.868 "trtype": "RDMA", 00:15:25.868 "adrfam": "IPv4", 00:15:25.868 "traddr": "192.168.100.8", 00:15:25.868 "trsvcid": "4420" 00:15:25.868 }, 00:15:25.868 "peer_address": { 00:15:25.868 "trtype": "RDMA", 00:15:25.868 "adrfam": "IPv4", 00:15:25.868 "traddr": "192.168.100.8", 00:15:25.868 "trsvcid": "54410" 00:15:25.868 }, 00:15:25.868 "auth": { 00:15:25.868 "state": "completed", 00:15:25.868 "digest": "sha384", 00:15:25.868 "dhgroup": "ffdhe3072" 00:15:25.868 } 00:15:25.868 } 00:15:25.868 ]' 00:15:25.868 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:25.868 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:25.868 10:04:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:25.868 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:25.868 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.127 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.127 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.127 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.127 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:15:27.064 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.064 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:27.064 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.064 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.064 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.064 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.064 10:04:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:27.064 10:04:11 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.064 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:27.323 00:15:27.323 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:27.323 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:27.323 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:27.582 { 00:15:27.582 "cntlid": 71, 00:15:27.582 "qid": 0, 00:15:27.582 "state": "enabled", 00:15:27.582 "thread": "nvmf_tgt_poll_group_000", 00:15:27.582 
"listen_address": { 00:15:27.582 "trtype": "RDMA", 00:15:27.582 "adrfam": "IPv4", 00:15:27.582 "traddr": "192.168.100.8", 00:15:27.582 "trsvcid": "4420" 00:15:27.582 }, 00:15:27.582 "peer_address": { 00:15:27.582 "trtype": "RDMA", 00:15:27.582 "adrfam": "IPv4", 00:15:27.582 "traddr": "192.168.100.8", 00:15:27.582 "trsvcid": "47540" 00:15:27.582 }, 00:15:27.582 "auth": { 00:15:27.582 "state": "completed", 00:15:27.582 "digest": "sha384", 00:15:27.582 "dhgroup": "ffdhe3072" 00:15:27.582 } 00:15:27.582 } 00:15:27.582 ]' 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.582 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.841 10:04:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:15:28.410 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.668 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.668 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:28.668 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.668 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.668 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.668 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:28.668 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:28.668 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.668 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 
-- # connect_authenticate sha384 ffdhe4096 0 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.928 10:04:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:29.187 00:15:29.187 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.187 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.187 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.187 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.187 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.187 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.187 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.187 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.187 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.187 { 00:15:29.187 "cntlid": 73, 00:15:29.187 "qid": 0, 00:15:29.187 "state": "enabled", 00:15:29.187 "thread": "nvmf_tgt_poll_group_000", 00:15:29.187 "listen_address": { 00:15:29.187 "trtype": "RDMA", 00:15:29.187 "adrfam": "IPv4", 00:15:29.187 "traddr": "192.168.100.8", 00:15:29.187 "trsvcid": "4420" 00:15:29.187 }, 00:15:29.187 "peer_address": { 00:15:29.187 "trtype": "RDMA", 00:15:29.187 
"adrfam": "IPv4", 00:15:29.187 "traddr": "192.168.100.8", 00:15:29.187 "trsvcid": "52051" 00:15:29.187 }, 00:15:29.187 "auth": { 00:15:29.187 "state": "completed", 00:15:29.187 "digest": "sha384", 00:15:29.187 "dhgroup": "ffdhe4096" 00:15:29.187 } 00:15:29.187 } 00:15:29.187 ]' 00:15:29.187 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.445 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.445 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.445 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:29.445 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.445 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.445 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.445 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.703 10:04:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:15:30.270 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.270 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:30.270 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.270 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.270 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.270 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.270 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.270 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:30.529 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:30.529 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.530 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:15:30.530 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:30.530 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:30.530 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.530 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.530 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.530 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.530 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.530 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.530 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.788 00:15:30.788 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.788 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.788 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.047 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.047 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.047 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.048 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.048 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.048 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:31.048 { 00:15:31.048 "cntlid": 75, 00:15:31.048 "qid": 0, 00:15:31.048 "state": "enabled", 00:15:31.048 "thread": "nvmf_tgt_poll_group_000", 00:15:31.048 "listen_address": { 00:15:31.048 "trtype": "RDMA", 00:15:31.048 "adrfam": "IPv4", 00:15:31.048 "traddr": "192.168.100.8", 00:15:31.048 "trsvcid": "4420" 00:15:31.048 }, 00:15:31.048 "peer_address": { 00:15:31.048 "trtype": "RDMA", 00:15:31.048 "adrfam": "IPv4", 00:15:31.048 "traddr": "192.168.100.8", 00:15:31.048 "trsvcid": "43085" 00:15:31.048 }, 00:15:31.048 "auth": { 00:15:31.048 "state": "completed", 00:15:31.048 "digest": "sha384", 00:15:31.048 "dhgroup": "ffdhe4096" 00:15:31.048 } 00:15:31.048 } 
00:15:31.048 ]' 00:15:31.048 10:04:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.048 10:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.048 10:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.048 10:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.048 10:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.048 10:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.048 10:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.048 10:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.306 10:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:15:31.875 10:04:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.875 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:31.875 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.875 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.875 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.875 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:31.875 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.172 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.431 00:15:32.431 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.431 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.431 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.689 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.689 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.689 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.689 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.689 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.689 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.689 { 00:15:32.689 "cntlid": 77, 00:15:32.689 "qid": 0, 00:15:32.689 "state": "enabled", 00:15:32.689 "thread": "nvmf_tgt_poll_group_000", 00:15:32.689 "listen_address": { 00:15:32.689 "trtype": "RDMA", 00:15:32.690 "adrfam": "IPv4", 00:15:32.690 "traddr": "192.168.100.8", 00:15:32.690 "trsvcid": "4420" 00:15:32.690 }, 00:15:32.690 "peer_address": { 00:15:32.690 "trtype": "RDMA", 00:15:32.690 "adrfam": "IPv4", 00:15:32.690 "traddr": "192.168.100.8", 00:15:32.690 "trsvcid": "59554" 00:15:32.690 }, 00:15:32.690 "auth": { 00:15:32.690 "state": "completed", 00:15:32.690 "digest": "sha384", 00:15:32.690 "dhgroup": "ffdhe4096" 00:15:32.690 } 00:15:32.690 } 00:15:32.690 ]' 00:15:32.690 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.690 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.690 10:04:17 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.690 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:32.690 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.690 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.690 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.690 10:04:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.948 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:15:33.514 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.771 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.771 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:33.771 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.771 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.771 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.771 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.771 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:33.771 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.030 10:04:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.288 00:15:34.288 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.288 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.288 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.288 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.288 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.288 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.288 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.288 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.288 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.288 { 00:15:34.288 "cntlid": 79, 00:15:34.288 "qid": 0, 00:15:34.288 "state": "enabled", 00:15:34.288 "thread": "nvmf_tgt_poll_group_000", 00:15:34.288 "listen_address": { 00:15:34.288 "trtype": "RDMA", 00:15:34.288 "adrfam": "IPv4", 00:15:34.288 "traddr": "192.168.100.8", 00:15:34.288 "trsvcid": "4420" 00:15:34.288 }, 00:15:34.288 "peer_address": { 00:15:34.288 "trtype": "RDMA", 00:15:34.288 "adrfam": "IPv4", 00:15:34.288 "traddr": "192.168.100.8", 00:15:34.288 "trsvcid": "54551" 00:15:34.288 }, 00:15:34.288 "auth": { 00:15:34.288 "state": "completed", 00:15:34.288 "digest": "sha384", 00:15:34.288 "dhgroup": "ffdhe4096" 00:15:34.288 } 00:15:34.288 } 00:15:34.288 ]' 00:15:34.288 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.547 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.547 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.547 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:34.547 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # 
jq -r '.[0].auth.state' 00:15:34.547 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.547 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.547 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.806 10:04:19 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:15:35.372 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.372 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:35.372 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.372 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.372 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.372 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:35.372 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:35.372 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:35.372 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.631 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:35.889 00:15:35.889 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.889 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.889 10:04:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:36.148 { 00:15:36.148 "cntlid": 81, 00:15:36.148 "qid": 0, 00:15:36.148 "state": "enabled", 00:15:36.148 "thread": "nvmf_tgt_poll_group_000", 00:15:36.148 "listen_address": { 00:15:36.148 "trtype": "RDMA", 00:15:36.148 "adrfam": "IPv4", 00:15:36.148 "traddr": "192.168.100.8", 00:15:36.148 "trsvcid": "4420" 00:15:36.148 }, 00:15:36.148 "peer_address": { 00:15:36.148 "trtype": "RDMA", 00:15:36.148 "adrfam": "IPv4", 00:15:36.148 "traddr": "192.168.100.8", 00:15:36.148 "trsvcid": "37588" 00:15:36.148 }, 00:15:36.148 "auth": { 00:15:36.148 "state": "completed", 00:15:36.148 "digest": "sha384", 00:15:36.148 "dhgroup": "ffdhe6144" 00:15:36.148 } 00:15:36.148 } 00:15:36.148 ]' 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.148 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:36.407 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.407 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.407 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.407 10:04:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:15:36.974 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.233 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:37.233 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.233 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.233 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.233 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:37.233 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:37.233 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:37.233 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:37.233 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.233 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.491 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:37.491 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:37.491 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.491 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.491 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.491 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.491 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.491 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.492 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:37.750 00:15:37.750 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.750 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.750 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.009 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.009 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.009 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.009 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.009 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.009 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:38.009 { 00:15:38.009 "cntlid": 83, 00:15:38.009 "qid": 0, 00:15:38.009 "state": "enabled", 00:15:38.009 "thread": "nvmf_tgt_poll_group_000", 00:15:38.009 "listen_address": { 00:15:38.009 "trtype": "RDMA", 00:15:38.009 "adrfam": "IPv4", 00:15:38.009 "traddr": "192.168.100.8", 00:15:38.009 "trsvcid": "4420" 00:15:38.009 }, 00:15:38.009 "peer_address": { 00:15:38.009 "trtype": "RDMA", 00:15:38.009 "adrfam": "IPv4", 00:15:38.009 "traddr": "192.168.100.8", 00:15:38.009 "trsvcid": "37640" 00:15:38.009 }, 00:15:38.009 "auth": { 00:15:38.009 "state": "completed", 00:15:38.009 "digest": "sha384", 00:15:38.009 "dhgroup": "ffdhe6144" 00:15:38.009 } 00:15:38.009 } 00:15:38.009 ]' 00:15:38.009 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:38.009 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.009 10:04:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:38.009 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:38.009 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:38.009 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.009 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.009 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:38.268 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:15:38.832 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.832 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:38.832 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.832 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.832 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.832 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.833 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.833 10:04:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.091 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.091 10:04:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:39.349 00:15:39.349 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.349 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.349 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.608 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.608 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.608 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.608 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.608 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.608 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.608 { 00:15:39.608 "cntlid": 85, 00:15:39.608 "qid": 0, 00:15:39.608 "state": "enabled", 00:15:39.608 "thread": "nvmf_tgt_poll_group_000", 00:15:39.608 "listen_address": { 00:15:39.608 "trtype": "RDMA", 00:15:39.608 "adrfam": "IPv4", 00:15:39.608 "traddr": "192.168.100.8", 00:15:39.608 "trsvcid": "4420" 00:15:39.608 }, 00:15:39.608 "peer_address": { 00:15:39.608 "trtype": "RDMA", 00:15:39.608 "adrfam": "IPv4", 00:15:39.608 "traddr": "192.168.100.8", 00:15:39.608 "trsvcid": "46667" 00:15:39.608 }, 00:15:39.608 "auth": { 00:15:39.608 "state": "completed", 00:15:39.608 "digest": "sha384", 00:15:39.608 "dhgroup": "ffdhe6144" 00:15:39.608 } 00:15:39.608 } 00:15:39.608 ]' 00:15:39.608 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.608 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.608 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.867 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:39.867 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.867 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.867 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.867 10:04:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.867 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:15:40.800 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.800 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:40.800 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.800 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.800 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.800 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.800 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:40.800 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:40.801 10:04:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:41.369 00:15:41.369 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.369 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:41.369 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.369 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.369 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.369 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.369 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.369 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.369 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:41.369 { 00:15:41.369 "cntlid": 87, 00:15:41.369 "qid": 0, 00:15:41.369 "state": "enabled", 00:15:41.369 "thread": "nvmf_tgt_poll_group_000", 00:15:41.369 "listen_address": { 00:15:41.369 "trtype": "RDMA", 00:15:41.369 "adrfam": "IPv4", 00:15:41.369 "traddr": "192.168.100.8", 00:15:41.369 "trsvcid": "4420" 00:15:41.369 }, 00:15:41.369 "peer_address": { 00:15:41.369 "trtype": "RDMA", 00:15:41.369 "adrfam": "IPv4", 00:15:41.369 "traddr": "192.168.100.8", 00:15:41.369 "trsvcid": "56798" 00:15:41.369 }, 00:15:41.369 "auth": { 00:15:41.369 "state": "completed", 00:15:41.369 "digest": "sha384", 00:15:41.369 "dhgroup": "ffdhe6144" 00:15:41.369 } 00:15:41.369 } 00:15:41.369 ]' 00:15:41.369 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:41.627 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.627 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:41.627 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:41.627 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:41.627 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:41.627 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:41.627 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.886 10:04:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:15:42.478 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:42.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:42.478 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:42.478 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.478 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.478 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.478 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:42.478 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:42.478 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.478 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.737 10:04:27 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:43.303 00:15:43.303 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc 
bdev_nvme_get_controllers 00:15:43.303 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:43.303 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.303 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.303 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.303 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.303 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.303 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.303 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:43.303 { 00:15:43.303 "cntlid": 89, 00:15:43.303 "qid": 0, 00:15:43.303 "state": "enabled", 00:15:43.303 "thread": "nvmf_tgt_poll_group_000", 00:15:43.303 "listen_address": { 00:15:43.303 "trtype": "RDMA", 00:15:43.303 "adrfam": "IPv4", 00:15:43.303 "traddr": "192.168.100.8", 00:15:43.303 "trsvcid": "4420" 00:15:43.303 }, 00:15:43.303 "peer_address": { 00:15:43.303 "trtype": "RDMA", 00:15:43.303 "adrfam": "IPv4", 00:15:43.303 "traddr": "192.168.100.8", 00:15:43.303 "trsvcid": "34371" 00:15:43.303 }, 00:15:43.303 "auth": { 00:15:43.303 "state": "completed", 00:15:43.303 "digest": "sha384", 00:15:43.303 "dhgroup": "ffdhe8192" 00:15:43.303 } 00:15:43.303 } 00:15:43.303 ]' 00:15:43.303 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:43.561 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.561 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:43.561 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:43.561 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:43.561 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.561 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.561 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.819 10:04:28 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:15:44.385 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.385 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
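Each digest/dhgroup/key pass above repeats the same five RPC steps. For orientation, a condensed sketch of one pass (sha384 / ffdhe8192 / key0), reconstructed from the commands visible in this log rather than quoted from target/auth.sh; it assumes keys named key0/ckey0 were registered earlier in the run (outside this excerpt), and that rpc_cmd talks to the default target socket while hostrpc uses /var/tmp/host.sock:

#!/usr/bin/env bash
# One connect_authenticate pass, as driven by the surrounding log.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
host="-s /var/tmp/host.sock"   # hostrpc socket; target-side RPCs use the default socket

# 1. Pin the host to a single digest/DH-group combination
$rpc $host bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

# 2. Authorize the host NQN on the subsystem with the key pair under test
$rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
  nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
  --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 3. Attach; DH-HMAC-CHAP runs in-band during the NVMe-oF CONNECT
$rpc $host bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
  -a 192.168.100.8 -s 4420 \
  -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
  -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0

# 4. Confirm the qpair negotiated what was requested, e.g.
#    {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe8192"}
$rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth'

# 5. Detach before the next digest/dhgroup/key combination
$rpc $host bdev_nvme_detach_controller nvme0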
00:15:44.385 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:44.386 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.386 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.386 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.386 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:44.386 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.386 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.644 10:04:29 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.210 00:15:45.210 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.210 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.210 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.210 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.210 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.210 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.210 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.210 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.210 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:45.210 { 00:15:45.210 "cntlid": 91, 00:15:45.211 "qid": 0, 00:15:45.211 "state": "enabled", 00:15:45.211 "thread": "nvmf_tgt_poll_group_000", 00:15:45.211 "listen_address": { 00:15:45.211 "trtype": "RDMA", 00:15:45.211 "adrfam": "IPv4", 00:15:45.211 "traddr": "192.168.100.8", 00:15:45.211 "trsvcid": "4420" 00:15:45.211 }, 00:15:45.211 "peer_address": { 00:15:45.211 "trtype": "RDMA", 00:15:45.211 "adrfam": "IPv4", 00:15:45.211 "traddr": "192.168.100.8", 00:15:45.211 "trsvcid": "33073" 00:15:45.211 }, 00:15:45.211 "auth": { 00:15:45.211 "state": "completed", 00:15:45.211 "digest": "sha384", 00:15:45.211 "dhgroup": "ffdhe8192" 00:15:45.211 } 00:15:45.211 } 00:15:45.211 ]' 00:15:45.211 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.211 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.211 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.470 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.470 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.470 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.470 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.470 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.470 10:04:30 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:46.412 10:04:31 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.412 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.413 10:04:31 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.016 00:15:47.016 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.016 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.016 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.274 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.274 10:04:32 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.274 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.274 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.274 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.274 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.274 { 00:15:47.274 "cntlid": 93, 00:15:47.274 "qid": 0, 00:15:47.274 "state": "enabled", 00:15:47.274 "thread": "nvmf_tgt_poll_group_000", 00:15:47.274 "listen_address": { 00:15:47.274 "trtype": "RDMA", 00:15:47.274 "adrfam": "IPv4", 00:15:47.274 "traddr": "192.168.100.8", 00:15:47.274 "trsvcid": "4420" 00:15:47.274 }, 00:15:47.274 "peer_address": { 00:15:47.274 "trtype": "RDMA", 00:15:47.274 "adrfam": "IPv4", 00:15:47.274 "traddr": "192.168.100.8", 00:15:47.274 "trsvcid": "49660" 00:15:47.274 }, 00:15:47.274 "auth": { 00:15:47.274 "state": "completed", 00:15:47.274 "digest": "sha384", 00:15:47.274 "dhgroup": "ffdhe8192" 00:15:47.274 } 00:15:47.274 } 00:15:47.274 ]' 00:15:47.274 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.275 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.275 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:47.275 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.275 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:47.275 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.275 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.275 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.533 10:04:32 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:15:48.100 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.100 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:48.100 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.100 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.100 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.100 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:48.100 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:48.100 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.360 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:48.930 00:15:48.930 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.930 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.930 10:04:33 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:49.189 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.189 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.189 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.189 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.189 
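The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion traced in this round is what makes the controller key optional: when ckeys[keyid] is empty, as it is for key3 here, the array expands to nothing, so nvmf_subsystem_add_host and the attach receive only --dhchap-key and authentication is one-way (the host proves itself; the controller is not challenged back). A sketch of the same conditional, with keyid standing in for the function's third argument:

keyid=3
ckeys[3]=    # empty for key3: no controller key in this round
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
rpc_cmd nvmf_subsystem_add_host "$nqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
# ckeys[3] empty -> ... --dhchap-key key3                              (one-way)
# ckeys[2] set   -> ... --dhchap-key key2 --dhchap-ctrlr-key ckey2     (mutual)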
10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.189 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:49.189 { 00:15:49.189 "cntlid": 95, 00:15:49.189 "qid": 0, 00:15:49.189 "state": "enabled", 00:15:49.190 "thread": "nvmf_tgt_poll_group_000", 00:15:49.190 "listen_address": { 00:15:49.190 "trtype": "RDMA", 00:15:49.190 "adrfam": "IPv4", 00:15:49.190 "traddr": "192.168.100.8", 00:15:49.190 "trsvcid": "4420" 00:15:49.190 }, 00:15:49.190 "peer_address": { 00:15:49.190 "trtype": "RDMA", 00:15:49.190 "adrfam": "IPv4", 00:15:49.190 "traddr": "192.168.100.8", 00:15:49.190 "trsvcid": "49866" 00:15:49.190 }, 00:15:49.190 "auth": { 00:15:49.190 "state": "completed", 00:15:49.190 "digest": "sha384", 00:15:49.190 "dhgroup": "ffdhe8192" 00:15:49.190 } 00:15:49.190 } 00:15:49.190 ]' 00:15:49.190 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:49.190 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:49.190 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:49.190 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:49.190 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:49.190 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.190 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.190 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.448 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:15:50.016 10:04:34 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.016 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.016 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:50.016 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.016 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.016 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.016 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:50.016 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.016 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.016 10:04:35 
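The target/auth.sh@91-93 frames visible just above are the outer loops driving this whole section: each digest is crossed with each DH group and each key index, and every combination runs the same connect/verify/teardown sequence, first through the SPDK host app and then through nvme-cli. In outline (a sketch reconstructed from the trace, with hostrpc standing for rpc.py pointed at the host app's socket, key names referring to keys registered earlier in the run, and --dhchap-ctrl-secret added on the nvme-cli side when a controller key exists):

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # pin the host app to a single digest/dhgroup combination
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # register the host NQN on the subsystem with the key under test
      rpc_cmd nvmf_subsystem_add_host "$nqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"
      # authenticate through the SPDK host app, check the qpair, tear down
      hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 \
        -s 4420 -q "$hostnqn" -n "$nqn" --dhchap-key "key$keyid" "${ckey[@]}"
      hostrpc bdev_nvme_detach_controller nvme0
      # repeat the handshake with nvme-cli, then drop the host entry again
      nvme connect -t rdma -a 192.168.100.8 -n "$nqn" -i 1 -q "$hostnqn" \
        --hostid "$hostid" --dhchap-secret "${keys[$keyid]}"
      nvme disconnect -n "$nqn"
      rpc_cmd nvmf_subsystem_remove_host "$nqn" "$hostnqn"
    done
  done
done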
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.016 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.275 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.534 00:15:50.534 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.534 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.534 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.793 10:04:35 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.793 { 00:15:50.793 "cntlid": 97, 00:15:50.793 "qid": 0, 00:15:50.793 "state": "enabled", 00:15:50.793 "thread": "nvmf_tgt_poll_group_000", 00:15:50.793 "listen_address": { 00:15:50.793 "trtype": "RDMA", 00:15:50.793 "adrfam": "IPv4", 00:15:50.793 "traddr": "192.168.100.8", 00:15:50.793 "trsvcid": "4420" 00:15:50.793 }, 00:15:50.793 "peer_address": { 00:15:50.793 "trtype": "RDMA", 00:15:50.793 "adrfam": "IPv4", 00:15:50.793 "traddr": "192.168.100.8", 00:15:50.793 "trsvcid": "50917" 00:15:50.793 }, 00:15:50.793 "auth": { 00:15:50.793 "state": "completed", 00:15:50.793 "digest": "sha512", 00:15:50.793 "dhgroup": "null" 00:15:50.793 } 00:15:50.793 } 00:15:50.793 ]' 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.793 10:04:35 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.052 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:15:51.619 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.619 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.619 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.619 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
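Two SPDK applications are being driven in this trace, which is why some RPCs carry -s /var/tmp/host.sock and some do not: the NVMe-oF target answers on its default RPC socket, while a second host-side bdev application (the initiator under test) listens on /var/tmp/host.sock and receives every bdev_nvme_* call through the hostrpc wrapper. In spirit (a simplified sketch; the real rpc_cmd helper in autotest_common.sh is more elaborate):

# target app: SPDK's default RPC socket (/var/tmp/spdk.sock)
rpc_cmd() { "$rootdir/scripts/rpc.py" "$@"; }
# host/initiator app: the -s /var/tmp/host.sock seen on every bdev_nvme_* call
hostrpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"; }

rpc_cmd nvmf_subsystem_get_qpairs "$nqn"   # queries the target
hostrpc bdev_nvme_get_controllers          # queries the host app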
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:51.878 10:04:36 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.137 00:15:52.137 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.137 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:52.137 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:52.396 { 00:15:52.396 "cntlid": 99, 00:15:52.396 "qid": 0, 00:15:52.396 "state": "enabled", 00:15:52.396 "thread": "nvmf_tgt_poll_group_000", 00:15:52.396 
"listen_address": { 00:15:52.396 "trtype": "RDMA", 00:15:52.396 "adrfam": "IPv4", 00:15:52.396 "traddr": "192.168.100.8", 00:15:52.396 "trsvcid": "4420" 00:15:52.396 }, 00:15:52.396 "peer_address": { 00:15:52.396 "trtype": "RDMA", 00:15:52.396 "adrfam": "IPv4", 00:15:52.396 "traddr": "192.168.100.8", 00:15:52.396 "trsvcid": "51592" 00:15:52.396 }, 00:15:52.396 "auth": { 00:15:52.396 "state": "completed", 00:15:52.396 "digest": "sha512", 00:15:52.396 "dhgroup": "null" 00:15:52.396 } 00:15:52.396 } 00:15:52.396 ]' 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.396 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.655 10:04:37 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:15:53.223 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.481 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:53.481 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.481 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.481 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.481 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:53.481 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.481 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:53.741 10:04:38 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.741 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.741 00:15:54.000 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:54.000 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:54.000 10:04:38 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.000 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.000 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.000 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.000 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.000 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.000 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:54.000 { 00:15:54.000 "cntlid": 101, 00:15:54.000 "qid": 0, 00:15:54.000 "state": "enabled", 00:15:54.000 "thread": "nvmf_tgt_poll_group_000", 00:15:54.000 "listen_address": { 00:15:54.000 "trtype": "RDMA", 00:15:54.000 "adrfam": "IPv4", 00:15:54.000 "traddr": "192.168.100.8", 00:15:54.000 "trsvcid": "4420" 00:15:54.000 }, 00:15:54.000 "peer_address": { 00:15:54.000 "trtype": "RDMA", 00:15:54.000 "adrfam": "IPv4", 00:15:54.000 "traddr": "192.168.100.8", 00:15:54.000 
"trsvcid": "39628" 00:15:54.000 }, 00:15:54.000 "auth": { 00:15:54.000 "state": "completed", 00:15:54.000 "digest": "sha512", 00:15:54.000 "dhgroup": "null" 00:15:54.000 } 00:15:54.000 } 00:15:54.000 ]' 00:15:54.000 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:54.000 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:54.000 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:54.260 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:54.260 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:54.260 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.260 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.260 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.260 10:04:39 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:55.196 10:04:40 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.196 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:55.455 00:15:55.455 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.455 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.455 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.714 { 00:15:55.714 "cntlid": 103, 00:15:55.714 "qid": 0, 00:15:55.714 "state": "enabled", 00:15:55.714 "thread": "nvmf_tgt_poll_group_000", 00:15:55.714 "listen_address": { 00:15:55.714 "trtype": "RDMA", 00:15:55.714 "adrfam": "IPv4", 00:15:55.714 "traddr": "192.168.100.8", 00:15:55.714 "trsvcid": "4420" 00:15:55.714 }, 00:15:55.714 "peer_address": { 00:15:55.714 "trtype": "RDMA", 00:15:55.714 "adrfam": "IPv4", 00:15:55.714 "traddr": "192.168.100.8", 00:15:55.714 "trsvcid": "39736" 00:15:55.714 }, 00:15:55.714 "auth": { 00:15:55.714 "state": "completed", 00:15:55.714 "digest": "sha512", 00:15:55.714 "dhgroup": "null" 00:15:55.714 } 00:15:55.714 } 00:15:55.714 ]' 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha512 == \s\h\a\5\1\2 ]] 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:55.714 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.973 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.973 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.973 10:04:40 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.973 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:15:56.554 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.813 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:57.072 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:57.072 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:57.072 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.072 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.072 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.072 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.072 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.072 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.072 10:04:41 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.072 00:15:57.072 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.072 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.072 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.331 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.331 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.331 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.331 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.331 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.331 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.331 { 00:15:57.331 "cntlid": 105, 00:15:57.331 "qid": 0, 00:15:57.331 "state": "enabled", 00:15:57.331 "thread": "nvmf_tgt_poll_group_000", 00:15:57.331 "listen_address": { 00:15:57.331 "trtype": "RDMA", 00:15:57.331 "adrfam": "IPv4", 00:15:57.331 "traddr": "192.168.100.8", 00:15:57.331 "trsvcid": "4420" 00:15:57.331 }, 00:15:57.331 "peer_address": { 00:15:57.331 "trtype": "RDMA", 00:15:57.331 "adrfam": "IPv4", 00:15:57.331 "traddr": "192.168.100.8", 00:15:57.331 "trsvcid": "56309" 00:15:57.331 }, 00:15:57.331 "auth": { 00:15:57.331 "state": "completed", 00:15:57.331 "digest": "sha512", 00:15:57.331 "dhgroup": "ffdhe2048" 00:15:57.331 } 00:15:57.331 } 00:15:57.331 ]' 00:15:57.331 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.331 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.331 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.590 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:57.590 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.590 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.590 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.590 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.590 10:04:42 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.525 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.783 00:15:58.783 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.783 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.783 10:04:43 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:59.041 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.042 { 00:15:59.042 "cntlid": 107, 00:15:59.042 "qid": 0, 00:15:59.042 "state": "enabled", 00:15:59.042 "thread": "nvmf_tgt_poll_group_000", 00:15:59.042 "listen_address": { 00:15:59.042 "trtype": "RDMA", 00:15:59.042 "adrfam": "IPv4", 00:15:59.042 "traddr": "192.168.100.8", 00:15:59.042 "trsvcid": "4420" 00:15:59.042 }, 00:15:59.042 "peer_address": { 00:15:59.042 "trtype": "RDMA", 00:15:59.042 "adrfam": "IPv4", 00:15:59.042 "traddr": "192.168.100.8", 00:15:59.042 "trsvcid": "57562" 00:15:59.042 }, 00:15:59.042 "auth": { 00:15:59.042 "state": "completed", 00:15:59.042 "digest": "sha512", 00:15:59.042 "dhgroup": "ffdhe2048" 00:15:59.042 } 00:15:59.042 } 00:15:59.042 ]' 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.042 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.301 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:15:59.869 10:04:44 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:00.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.127 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.387 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.387 
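On the nvme-cli side of each round the keys travel inline in the DHHC-1 format seen above, "DHHC-1:<id>:<base64 material>:", where the middle field tags how the secret was transformed per the NVMe DH-HMAC-CHAP secret representation. Condensed from the connect traced here, with the long secrets elided as placeholders:

# the host proves itself with --dhchap-secret; --dhchap-ctrl-secret, when a
# round has a controller key, makes the controller prove itself back (mutual auth)
nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q "$hostnqn" --hostid "$hostid" \
  --dhchap-secret 'DHHC-1:01:<base64 key material>:' \
  --dhchap-ctrl-secret 'DHHC-1:02:<base64 key material>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0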
10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.387 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.387 00:16:00.387 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.387 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.387 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.646 { 00:16:00.646 "cntlid": 109, 00:16:00.646 "qid": 0, 00:16:00.646 "state": "enabled", 00:16:00.646 "thread": "nvmf_tgt_poll_group_000", 00:16:00.646 "listen_address": { 00:16:00.646 "trtype": "RDMA", 00:16:00.646 "adrfam": "IPv4", 00:16:00.646 "traddr": "192.168.100.8", 00:16:00.646 "trsvcid": "4420" 00:16:00.646 }, 00:16:00.646 "peer_address": { 00:16:00.646 "trtype": "RDMA", 00:16:00.646 "adrfam": "IPv4", 00:16:00.646 "traddr": "192.168.100.8", 00:16:00.646 "trsvcid": "57151" 00:16:00.646 }, 00:16:00.646 "auth": { 00:16:00.646 "state": "completed", 00:16:00.646 "digest": "sha512", 00:16:00.646 "dhgroup": "ffdhe2048" 00:16:00.646 } 00:16:00.646 } 00:16:00.646 ]' 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:00.646 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.905 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.905 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.905 10:04:45 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.905 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:16:01.474 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.733 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.733 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:01.733 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.733 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.733 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.733 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.733 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.733 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.991 10:04:46 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:01.991 10:04:46 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.250 00:16:02.250 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.250 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:02.250 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.250 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.250 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.250 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.250 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.250 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.250 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:02.250 { 00:16:02.250 "cntlid": 111, 00:16:02.250 "qid": 0, 00:16:02.250 "state": "enabled", 00:16:02.250 "thread": "nvmf_tgt_poll_group_000", 00:16:02.250 "listen_address": { 00:16:02.250 "trtype": "RDMA", 00:16:02.250 "adrfam": "IPv4", 00:16:02.250 "traddr": "192.168.100.8", 00:16:02.250 "trsvcid": "4420" 00:16:02.250 }, 00:16:02.250 "peer_address": { 00:16:02.250 "trtype": "RDMA", 00:16:02.250 "adrfam": "IPv4", 00:16:02.250 "traddr": "192.168.100.8", 00:16:02.250 "trsvcid": "57125" 00:16:02.250 }, 00:16:02.250 "auth": { 00:16:02.250 "state": "completed", 00:16:02.250 "digest": "sha512", 00:16:02.250 "dhgroup": "ffdhe2048" 00:16:02.250 } 00:16:02.250 } 00:16:02.250 ]' 00:16:02.250 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:02.508 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:02.508 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:02.508 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:02.508 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:02.508 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.508 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.508 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.876 10:04:47 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:16:03.446 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.446 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.446 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:03.446 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.446 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.446 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.446 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.446 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.446 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:03.446 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.705 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.963 00:16:03.963 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.963 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.963 10:04:48 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.963 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.963 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.963 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.963 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.963 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.963 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.963 { 00:16:03.963 "cntlid": 113, 00:16:03.963 "qid": 0, 00:16:03.963 "state": "enabled", 00:16:03.963 "thread": "nvmf_tgt_poll_group_000", 00:16:03.963 "listen_address": { 00:16:03.963 "trtype": "RDMA", 00:16:03.963 "adrfam": "IPv4", 00:16:03.963 "traddr": "192.168.100.8", 00:16:03.963 "trsvcid": "4420" 00:16:03.963 }, 00:16:03.963 "peer_address": { 00:16:03.963 "trtype": "RDMA", 00:16:03.963 "adrfam": "IPv4", 00:16:03.963 "traddr": "192.168.100.8", 00:16:03.963 "trsvcid": "32945" 00:16:03.963 }, 00:16:03.963 "auth": { 00:16:03.963 "state": "completed", 00:16:03.963 "digest": "sha512", 00:16:03.963 "dhgroup": "ffdhe3072" 00:16:03.963 } 00:16:03.963 } 00:16:03.963 ]' 00:16:03.963 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.221 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.221 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.221 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:04.221 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.221 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.221 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.222 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.480 10:04:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret 
DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:16:05.047 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.047 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:05.047 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.047 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.047 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.047 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.047 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.047 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.306 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.564 00:16:05.564 10:04:50 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.564 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.564 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:05.823 { 00:16:05.823 "cntlid": 115, 00:16:05.823 "qid": 0, 00:16:05.823 "state": "enabled", 00:16:05.823 "thread": "nvmf_tgt_poll_group_000", 00:16:05.823 "listen_address": { 00:16:05.823 "trtype": "RDMA", 00:16:05.823 "adrfam": "IPv4", 00:16:05.823 "traddr": "192.168.100.8", 00:16:05.823 "trsvcid": "4420" 00:16:05.823 }, 00:16:05.823 "peer_address": { 00:16:05.823 "trtype": "RDMA", 00:16:05.823 "adrfam": "IPv4", 00:16:05.823 "traddr": "192.168.100.8", 00:16:05.823 "trsvcid": "44907" 00:16:05.823 }, 00:16:05.823 "auth": { 00:16:05.823 "state": "completed", 00:16:05.823 "digest": "sha512", 00:16:05.823 "dhgroup": "ffdhe3072" 00:16:05.823 } 00:16:05.823 } 00:16:05.823 ]' 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.823 10:04:50 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.081 10:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:16:06.647 10:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.905 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.905 10:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:06.905 10:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.905 10:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.905 10:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.905 10:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:06.905 10:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.905 10:04:51 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:06.905 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:06.905 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:06.905 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:06.905 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:06.906 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:06.906 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.906 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.906 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.906 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.906 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.906 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.906 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:07.164 00:16:07.164 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:07.164 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:07.164 10:04:52 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.422 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.422 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.422 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.422 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.422 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.422 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:07.422 { 00:16:07.422 "cntlid": 117, 00:16:07.422 "qid": 0, 00:16:07.422 "state": "enabled", 00:16:07.422 "thread": "nvmf_tgt_poll_group_000", 00:16:07.422 "listen_address": { 00:16:07.422 "trtype": "RDMA", 00:16:07.422 "adrfam": "IPv4", 00:16:07.422 "traddr": "192.168.100.8", 00:16:07.422 "trsvcid": "4420" 00:16:07.422 }, 00:16:07.422 "peer_address": { 00:16:07.422 "trtype": "RDMA", 00:16:07.422 "adrfam": "IPv4", 00:16:07.422 "traddr": "192.168.100.8", 00:16:07.422 "trsvcid": "34570" 00:16:07.422 }, 00:16:07.422 "auth": { 00:16:07.422 "state": "completed", 00:16:07.422 "digest": "sha512", 00:16:07.423 "dhgroup": "ffdhe3072" 00:16:07.423 } 00:16:07.423 } 00:16:07.423 ]' 00:16:07.423 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:07.423 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.423 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:07.423 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:07.423 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:07.681 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.681 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.681 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.681 10:04:52 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:16:08.249 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.507 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:08.507 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.507 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.507 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.507 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:08.507 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.508 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:08.766 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:08.766 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.766 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.766 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:08.766 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:08.767 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.767 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:08.767 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.767 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.767 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.767 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.767 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:09.025 00:16:09.025 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:09.025 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:09.025 10:04:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.025 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.025 10:04:54 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.025 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.025 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.025 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.025 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:09.025 { 00:16:09.025 "cntlid": 119, 00:16:09.025 "qid": 0, 00:16:09.025 "state": "enabled", 00:16:09.025 "thread": "nvmf_tgt_poll_group_000", 00:16:09.025 "listen_address": { 00:16:09.025 "trtype": "RDMA", 00:16:09.025 "adrfam": "IPv4", 00:16:09.025 "traddr": "192.168.100.8", 00:16:09.025 "trsvcid": "4420" 00:16:09.025 }, 00:16:09.025 "peer_address": { 00:16:09.025 "trtype": "RDMA", 00:16:09.025 "adrfam": "IPv4", 00:16:09.025 "traddr": "192.168.100.8", 00:16:09.025 "trsvcid": "50764" 00:16:09.025 }, 00:16:09.025 "auth": { 00:16:09.025 "state": "completed", 00:16:09.025 "digest": "sha512", 00:16:09.025 "dhgroup": "ffdhe3072" 00:16:09.025 } 00:16:09.025 } 00:16:09.025 ]' 00:16:09.025 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:09.025 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:09.284 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:09.284 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:09.284 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:09.284 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:09.284 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.284 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.542 10:04:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:16:10.110 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.110 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:10.110 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.110 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.110 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.110 
10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:10.110 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.110 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:10.110 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.369 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.627 00:16:10.627 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.628 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.628 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.886 { 00:16:10.886 "cntlid": 121, 00:16:10.886 "qid": 0, 00:16:10.886 "state": "enabled", 00:16:10.886 "thread": "nvmf_tgt_poll_group_000", 00:16:10.886 "listen_address": { 00:16:10.886 "trtype": "RDMA", 00:16:10.886 "adrfam": "IPv4", 00:16:10.886 "traddr": "192.168.100.8", 00:16:10.886 "trsvcid": "4420" 00:16:10.886 }, 00:16:10.886 "peer_address": { 00:16:10.886 "trtype": "RDMA", 00:16:10.886 "adrfam": "IPv4", 00:16:10.886 "traddr": "192.168.100.8", 00:16:10.886 "trsvcid": "49614" 00:16:10.886 }, 00:16:10.886 "auth": { 00:16:10.886 "state": "completed", 00:16:10.886 "digest": "sha512", 00:16:10.886 "dhgroup": "ffdhe4096" 00:16:10.886 } 00:16:10.886 } 00:16:10.886 ]' 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:10.886 10:04:55 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:10.886 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.886 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.886 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.145 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:16:11.712 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.970 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.970 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:11.970 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.970 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.970 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.970 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.970 10:04:56 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:11.970 10:04:56 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.229 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.487 00:16:12.487 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.487 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.487 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.487 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.487 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.487 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.487 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.487 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.487 
10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.487 { 00:16:12.487 "cntlid": 123, 00:16:12.487 "qid": 0, 00:16:12.487 "state": "enabled", 00:16:12.487 "thread": "nvmf_tgt_poll_group_000", 00:16:12.487 "listen_address": { 00:16:12.487 "trtype": "RDMA", 00:16:12.487 "adrfam": "IPv4", 00:16:12.487 "traddr": "192.168.100.8", 00:16:12.487 "trsvcid": "4420" 00:16:12.487 }, 00:16:12.487 "peer_address": { 00:16:12.487 "trtype": "RDMA", 00:16:12.487 "adrfam": "IPv4", 00:16:12.487 "traddr": "192.168.100.8", 00:16:12.487 "trsvcid": "38688" 00:16:12.487 }, 00:16:12.487 "auth": { 00:16:12.487 "state": "completed", 00:16:12.487 "digest": "sha512", 00:16:12.487 "dhgroup": "ffdhe4096" 00:16:12.487 } 00:16:12.487 } 00:16:12.487 ]' 00:16:12.487 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.487 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.488 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.746 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:12.746 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:12.746 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:12.746 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:12.746 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:12.746 10:04:57 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.682 10:04:58 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.940 00:16:13.940 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:13.940 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:13.940 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.198 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.198 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.199 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.199 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.199 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.199 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.199 { 00:16:14.199 "cntlid": 125, 00:16:14.199 "qid": 0, 00:16:14.199 "state": "enabled", 00:16:14.199 "thread": "nvmf_tgt_poll_group_000", 
00:16:14.199 "listen_address": { 00:16:14.199 "trtype": "RDMA", 00:16:14.199 "adrfam": "IPv4", 00:16:14.199 "traddr": "192.168.100.8", 00:16:14.199 "trsvcid": "4420" 00:16:14.199 }, 00:16:14.199 "peer_address": { 00:16:14.199 "trtype": "RDMA", 00:16:14.199 "adrfam": "IPv4", 00:16:14.199 "traddr": "192.168.100.8", 00:16:14.199 "trsvcid": "35395" 00:16:14.199 }, 00:16:14.199 "auth": { 00:16:14.199 "state": "completed", 00:16:14.199 "digest": "sha512", 00:16:14.199 "dhgroup": "ffdhe4096" 00:16:14.199 } 00:16:14.199 } 00:16:14.199 ]' 00:16:14.199 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.199 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:14.199 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.457 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:14.457 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.457 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.457 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.457 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.457 10:04:59 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 
00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:15.391 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:15.649 00:16:15.649 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.650 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.650 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.908 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.908 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.908 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.908 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.908 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.908 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.908 { 00:16:15.908 "cntlid": 127, 00:16:15.908 "qid": 0, 00:16:15.908 "state": "enabled", 00:16:15.908 "thread": "nvmf_tgt_poll_group_000", 00:16:15.908 "listen_address": { 00:16:15.908 "trtype": "RDMA", 00:16:15.908 "adrfam": "IPv4", 00:16:15.908 "traddr": "192.168.100.8", 00:16:15.908 "trsvcid": "4420" 00:16:15.908 }, 00:16:15.908 "peer_address": { 00:16:15.908 "trtype": "RDMA", 00:16:15.908 "adrfam": "IPv4", 00:16:15.908 "traddr": "192.168.100.8", 00:16:15.908 "trsvcid": "51434" 00:16:15.908 }, 00:16:15.908 
"auth": { 00:16:15.908 "state": "completed", 00:16:15.908 "digest": "sha512", 00:16:15.908 "dhgroup": "ffdhe4096" 00:16:15.908 } 00:16:15.908 } 00:16:15.908 ]' 00:16:15.908 10:05:00 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.908 10:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.908 10:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.908 10:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:15.908 10:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.167 10:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.167 10:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.167 10:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.167 10:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:16:16.733 10:05:01 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.991 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:16.991 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.991 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.991 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.991 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.991 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.991 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:16.991 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.250 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:17.508 00:16:17.508 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.508 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.508 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.766 { 00:16:17.766 "cntlid": 129, 00:16:17.766 "qid": 0, 00:16:17.766 "state": "enabled", 00:16:17.766 "thread": "nvmf_tgt_poll_group_000", 00:16:17.766 "listen_address": { 00:16:17.766 "trtype": "RDMA", 00:16:17.766 "adrfam": "IPv4", 00:16:17.766 "traddr": "192.168.100.8", 00:16:17.766 "trsvcid": "4420" 00:16:17.766 }, 00:16:17.766 "peer_address": { 00:16:17.766 "trtype": "RDMA", 00:16:17.766 "adrfam": "IPv4", 00:16:17.766 "traddr": "192.168.100.8", 00:16:17.766 "trsvcid": "51229" 00:16:17.766 }, 00:16:17.766 "auth": { 00:16:17.766 "state": "completed", 00:16:17.766 "digest": "sha512", 00:16:17.766 "dhgroup": "ffdhe6144" 00:16:17.766 } 00:16:17.766 } 00:16:17.766 ]' 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.766 10:05:02 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.025 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:16:18.593 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:18.850 10:05:03 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.850 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.850 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.850 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.108 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.108 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.108 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:19.366 00:16:19.366 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.366 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.366 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.625 { 00:16:19.625 "cntlid": 131, 00:16:19.625 "qid": 0, 00:16:19.625 "state": "enabled", 00:16:19.625 "thread": "nvmf_tgt_poll_group_000", 00:16:19.625 "listen_address": { 00:16:19.625 "trtype": "RDMA", 00:16:19.625 "adrfam": "IPv4", 00:16:19.625 "traddr": "192.168.100.8", 00:16:19.625 "trsvcid": "4420" 00:16:19.625 }, 00:16:19.625 "peer_address": { 00:16:19.625 "trtype": "RDMA", 00:16:19.625 "adrfam": "IPv4", 00:16:19.625 "traddr": "192.168.100.8", 00:16:19.625 "trsvcid": "47092" 00:16:19.625 }, 00:16:19.625 "auth": { 00:16:19.625 "state": "completed", 00:16:19.625 "digest": "sha512", 00:16:19.625 "dhgroup": "ffdhe6144" 00:16:19.625 } 00:16:19.625 } 00:16:19.625 ]' 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.625 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.883 10:05:04 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:16:20.450 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.450 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:20.450 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.450 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.450 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.450 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.450 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:20.450 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.709 10:05:05 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.968 00:16:20.968 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.968 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.968 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.227 { 00:16:21.227 "cntlid": 133, 00:16:21.227 "qid": 0, 00:16:21.227 "state": "enabled", 00:16:21.227 "thread": "nvmf_tgt_poll_group_000", 00:16:21.227 "listen_address": { 00:16:21.227 "trtype": "RDMA", 00:16:21.227 "adrfam": "IPv4", 00:16:21.227 "traddr": "192.168.100.8", 00:16:21.227 "trsvcid": "4420" 00:16:21.227 }, 00:16:21.227 "peer_address": { 00:16:21.227 "trtype": "RDMA", 00:16:21.227 "adrfam": "IPv4", 00:16:21.227 "traddr": "192.168.100.8", 00:16:21.227 "trsvcid": "56471" 00:16:21.227 }, 00:16:21.227 "auth": { 00:16:21.227 "state": "completed", 00:16:21.227 "digest": "sha512", 00:16:21.227 "dhgroup": "ffdhe6144" 00:16:21.227 } 00:16:21.227 } 00:16:21.227 ]' 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:21.227 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
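The log here is cycling connect_authenticate passes from target/auth.sh, one pass per digest / DH group / key combination: provision a DH-HMAC-CHAP key for the host NQN on the target, attach a controller through the SPDK host daemon on /var/tmp/host.sock, then confirm via nvmf_subsystem_get_qpairs that authentication completed with the expected digest and DH group. Stripped of the xtrace noise, a single pass reduces to the minimal sketch below; it assumes the same RPC sockets, NQNs, and previously loaded key names (key2/ckey2) as this run, and is a reading aid rather than a verbatim extract of the script. The \s\h\a\5\1\2-style patterns in the trace are just how bash xtrace prints a quoted, non-pattern right-hand side of a [[ == ]] comparison.

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

  # Host side: restrict the initiator to one digest / DH group combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Target side (default RPC socket in this run): admit the host NQN with a
  # key, plus a controller key when the pass exercises bidirectional auth.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Host side: attach; DH-HMAC-CHAP runs during controller initialization.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t rdma -f ipv4 -a 192.168.100.8 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Target side: the new queue pair should report the negotiated parameters.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
      | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
  # expected output: completed sha512 ffdhe6144

  # Tear down before the next digest/dhgroup/key combination.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0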
00:16:21.486 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.486 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.486 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.486 10:05:06 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.420 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.679 00:16:22.937 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.937 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.937 10:05:07 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.937 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.937 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.937 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.937 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.937 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.937 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.937 { 00:16:22.937 "cntlid": 135, 00:16:22.937 "qid": 0, 00:16:22.937 "state": "enabled", 00:16:22.937 "thread": "nvmf_tgt_poll_group_000", 00:16:22.937 "listen_address": { 00:16:22.937 "trtype": "RDMA", 00:16:22.937 "adrfam": "IPv4", 00:16:22.937 "traddr": "192.168.100.8", 00:16:22.937 "trsvcid": "4420" 00:16:22.937 }, 00:16:22.937 "peer_address": { 00:16:22.937 "trtype": "RDMA", 00:16:22.937 "adrfam": "IPv4", 00:16:22.937 "traddr": "192.168.100.8", 00:16:22.937 "trsvcid": "56561" 00:16:22.937 }, 00:16:22.937 "auth": { 00:16:22.937 "state": "completed", 00:16:22.937 "digest": "sha512", 00:16:22.937 "dhgroup": "ffdhe6144" 00:16:22.937 } 00:16:22.937 } 00:16:22.937 ]' 00:16:22.938 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.938 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.938 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.197 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:23.197 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.197 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.197 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.197 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.197 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:16:24.144 10:05:08 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.144 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.144 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.711 00:16:24.711 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.711 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.711 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.970 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.970 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.970 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.970 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.970 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.970 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.970 { 00:16:24.970 "cntlid": 137, 00:16:24.970 "qid": 0, 00:16:24.970 "state": "enabled", 00:16:24.970 "thread": "nvmf_tgt_poll_group_000", 00:16:24.970 "listen_address": { 00:16:24.970 "trtype": "RDMA", 00:16:24.970 "adrfam": "IPv4", 00:16:24.970 "traddr": "192.168.100.8", 00:16:24.970 "trsvcid": "4420" 00:16:24.970 }, 00:16:24.970 "peer_address": { 00:16:24.970 "trtype": "RDMA", 00:16:24.970 "adrfam": "IPv4", 00:16:24.970 "traddr": "192.168.100.8", 00:16:24.970 "trsvcid": "39711" 00:16:24.970 }, 00:16:24.970 "auth": { 00:16:24.970 "state": "completed", 00:16:24.970 "digest": "sha512", 00:16:24.970 "dhgroup": "ffdhe8192" 00:16:24.970 } 00:16:24.970 } 00:16:24.970 ]' 00:16:24.970 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.970 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:24.970 10:05:09 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.970 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:24.970 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.970 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.970 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.970 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.229 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:16:25.795 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.053 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.053 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:26.053 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.053 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.053 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.053 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.053 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.053 10:05:10 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.053 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.620 00:16:26.620 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.620 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.620 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:26.878 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.878 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.878 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.878 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.878 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.878 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:26.878 { 00:16:26.878 "cntlid": 139, 00:16:26.878 "qid": 0, 00:16:26.878 "state": "enabled", 00:16:26.878 "thread": "nvmf_tgt_poll_group_000", 00:16:26.878 "listen_address": { 00:16:26.878 "trtype": "RDMA", 00:16:26.878 "adrfam": "IPv4", 00:16:26.878 "traddr": "192.168.100.8", 00:16:26.878 "trsvcid": "4420" 00:16:26.878 }, 00:16:26.878 "peer_address": { 00:16:26.878 "trtype": "RDMA", 00:16:26.878 "adrfam": "IPv4", 00:16:26.878 "traddr": "192.168.100.8", 00:16:26.878 "trsvcid": "46373" 00:16:26.878 }, 00:16:26.878 "auth": { 00:16:26.878 "state": "completed", 00:16:26.878 "digest": "sha512", 00:16:26.878 "dhgroup": "ffdhe8192" 00:16:26.878 } 00:16:26.878 } 00:16:26.878 ]' 00:16:26.878 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:26.878 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.878 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:26.879 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:26.879 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:26.879 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.879 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.879 10:05:11 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.137 10:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:MzhiNzJjYjVhNzBlYTRkZGM1NTNhN2IwZWM0NTgzY2VwdbU/: --dhchap-ctrl-secret DHHC-1:02:MzA3YjdiYzE3YTgzY2U3ZTZkMzNmZmU5ZTRiM2JjM2JmYmFjODNhMTg5MDBlYTRh/+y2Sw==: 00:16:27.702 10:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.960 10:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.960 10:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.960 10:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.960 10:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.961 10:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.961 10:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:27.961 10:05:12 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:28.219 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:16:28.476 00:16:28.476 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:28.476 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:28.477 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:28.734 { 00:16:28.734 "cntlid": 141, 00:16:28.734 "qid": 0, 00:16:28.734 "state": "enabled", 00:16:28.734 "thread": "nvmf_tgt_poll_group_000", 00:16:28.734 "listen_address": { 00:16:28.734 "trtype": "RDMA", 00:16:28.734 "adrfam": "IPv4", 00:16:28.734 "traddr": "192.168.100.8", 00:16:28.734 "trsvcid": "4420" 00:16:28.734 }, 00:16:28.734 "peer_address": { 00:16:28.734 "trtype": "RDMA", 00:16:28.734 "adrfam": "IPv4", 00:16:28.734 "traddr": "192.168.100.8", 00:16:28.734 "trsvcid": "52472" 00:16:28.734 }, 00:16:28.734 "auth": { 00:16:28.734 "state": "completed", 00:16:28.734 "digest": "sha512", 00:16:28.734 "dhgroup": "ffdhe8192" 00:16:28.734 } 00:16:28.734 } 00:16:28.734 ]' 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:28.734 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.993 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.993 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.993 10:05:13 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.993 10:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:02:ZWNhOGE0ZTViNGMwOTQyMzZjNDgxMTc3Y2ViZGQ0ZmQwMGU3NjZiY2Y1MWQ1OTYwLjtoPQ==: --dhchap-ctrl-secret DHHC-1:01:MzY1OGU4NTk3OWVmYjY1NzU2OTE5ZWU3ZjdlYmM3OTQyRYyQ: 00:16:29.562 10:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
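Each pass then replays the same credentials through the kernel initiator, as in the nvme connect / nvme disconnect pair being traced here. A condensed form follows, with the generated base64 secrets elided as <...> placeholders (they appear in full in the trace). The DHHC-1:<id>: prefix on each secret is the DH-HMAC-CHAP secret representation's transform identifier: 00 means an untransformed secret, while 01, 02, and 03 mean the secret was transformed with SHA-256, SHA-384, and SHA-512 respectively, consistent with the four keys in this log carrying prefixes 00 through 03.

  # Kernel-initiator check of the same key material (sketch; placeholders
  # stand for the secrets shown in full in the trace).
  # -i 1 caps the connection at a single I/O queue, as in the log.
  nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
      --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 \
      --dhchap-secret 'DHHC-1:02:<host-secret>:' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller-secret>:'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0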
00:16:29.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.826 10:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.826 10:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.826 10:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.826 10:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.826 10:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:29.826 10:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:29.826 10:05:14 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.100 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:30.364 00:16:30.364 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.364 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.364 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.623 { 00:16:30.623 "cntlid": 143, 00:16:30.623 "qid": 0, 00:16:30.623 "state": "enabled", 00:16:30.623 "thread": "nvmf_tgt_poll_group_000", 00:16:30.623 "listen_address": { 00:16:30.623 "trtype": "RDMA", 00:16:30.623 "adrfam": "IPv4", 00:16:30.623 "traddr": "192.168.100.8", 00:16:30.623 "trsvcid": "4420" 00:16:30.623 }, 00:16:30.623 "peer_address": { 00:16:30.623 "trtype": "RDMA", 00:16:30.623 "adrfam": "IPv4", 00:16:30.623 "traddr": "192.168.100.8", 00:16:30.623 "trsvcid": "59793" 00:16:30.623 }, 00:16:30.623 "auth": { 00:16:30.623 "state": "completed", 00:16:30.623 "digest": "sha512", 00:16:30.623 "dhgroup": "ffdhe8192" 00:16:30.623 } 00:16:30.623 } 00:16:30.623 ]' 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:30.623 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.881 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.881 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.881 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.882 10:05:15 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:16:31.448 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.706 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.706 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:31.706 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:31.706 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.706 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.706 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:31.706 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:31.706 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:31.706 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:31.706 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:31.706 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:31.965 10:05:16 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.224 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.483 { 00:16:32.483 "cntlid": 145, 00:16:32.483 "qid": 0, 00:16:32.483 "state": "enabled", 00:16:32.483 "thread": "nvmf_tgt_poll_group_000", 00:16:32.483 "listen_address": { 00:16:32.483 "trtype": "RDMA", 00:16:32.483 "adrfam": "IPv4", 00:16:32.483 "traddr": "192.168.100.8", 00:16:32.483 "trsvcid": "4420" 00:16:32.483 }, 00:16:32.483 "peer_address": { 00:16:32.483 "trtype": "RDMA", 00:16:32.483 "adrfam": "IPv4", 00:16:32.483 "traddr": "192.168.100.8", 00:16:32.483 "trsvcid": "52558" 00:16:32.483 }, 00:16:32.483 "auth": { 00:16:32.483 "state": "completed", 00:16:32.483 "digest": "sha512", 00:16:32.483 "dhgroup": "ffdhe8192" 00:16:32.483 } 00:16:32.483 } 00:16:32.483 ]' 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:32.483 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.742 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:32.742 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.742 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.742 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.742 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.742 10:05:17 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:00:NzI1YzgzMTk3YzJlMWU0YjlhZmI4ZjI3ZGUxNjdlYjY3ZWFmNThlMTM1YTMzMTIxKTDIww==: --dhchap-ctrl-secret DHHC-1:03:YTBmZDVmN2I0NWQ2ZjIyODNmMGU2YTZiYzZhZTZjNDJkYWY5ZWM0NGFjMTgyZDkxYzhhZWZkZDdkMzA5ZDJjOMDLTxM=: 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:33.680 10:05:18 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:05.762 request: 00:17:05.762 { 00:17:05.762 "name": "nvme0", 00:17:05.762 "trtype": "rdma", 00:17:05.762 "traddr": "192.168.100.8", 00:17:05.762 "adrfam": "ipv4", 00:17:05.762 "trsvcid": "4420", 00:17:05.762 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:05.762 "prchk_reftag": false, 00:17:05.762 "prchk_guard": false, 00:17:05.762 "hdgst": false, 00:17:05.762 "ddgst": false, 00:17:05.762 "dhchap_key": "key2", 00:17:05.762 "method": 
"bdev_nvme_attach_controller", 00:17:05.762 "req_id": 1 00:17:05.762 } 00:17:05.762 Got JSON-RPC error response 00:17:05.762 response: 00:17:05.762 { 00:17:05.762 "code": -5, 00:17:05.762 "message": "Input/output error" 00:17:05.762 } 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.762 10:05:49 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:05.762 request: 00:17:05.762 { 00:17:05.762 "name": "nvme0", 00:17:05.762 "trtype": "rdma", 00:17:05.762 "traddr": "192.168.100.8", 00:17:05.762 "adrfam": "ipv4", 00:17:05.762 "trsvcid": "4420", 00:17:05.762 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:05.762 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:05.762 "prchk_reftag": false, 00:17:05.762 "prchk_guard": false, 00:17:05.762 "hdgst": false, 00:17:05.762 "ddgst": false, 00:17:05.762 "dhchap_key": "key1", 00:17:05.762 "dhchap_ctrlr_key": "ckey2", 00:17:05.762 "method": "bdev_nvme_attach_controller", 00:17:05.762 "req_id": 1 00:17:05.762 } 00:17:05.762 Got JSON-RPC error response 00:17:05.762 response: 00:17:05.762 { 00:17:05.762 "code": -5, 00:17:05.762 "message": "Input/output error" 00:17:05.762 } 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.762 10:05:49 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.892 request: 00:17:37.892 { 00:17:37.892 "name": "nvme0", 00:17:37.892 "trtype": "rdma", 00:17:37.892 "traddr": "192.168.100.8", 00:17:37.892 "adrfam": "ipv4", 00:17:37.892 "trsvcid": "4420", 00:17:37.892 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:37.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:37.892 "prchk_reftag": false, 00:17:37.892 "prchk_guard": false, 00:17:37.892 "hdgst": false, 00:17:37.892 "ddgst": false, 00:17:37.892 "dhchap_key": "key1", 00:17:37.892 "dhchap_ctrlr_key": "ckey1", 00:17:37.892 "method": "bdev_nvme_attach_controller", 00:17:37.892 "req_id": 1 00:17:37.892 } 00:17:37.892 Got JSON-RPC error response 00:17:37.892 response: 00:17:37.892 { 00:17:37.892 "code": -5, 00:17:37.892 "message": "Input/output error" 00:17:37.892 } 00:17:37.892 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:37.892 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:37.892 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2534432 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2534432 ']' 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2534432 00:17:37.893 10:06:20 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2534432 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2534432' 00:17:37.893 killing process with pid 2534432 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2534432 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2534432 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2567682 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2567682 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2567682 ']' 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
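At this point the trace has killed the first nvmf_tgt instance (pid 2534432) and relaunched it with --wait-for-rpc and -L nvmf_auth so the DHCHAP configuration can be replayed before the listener comes up. A minimal sketch of that relaunch-and-wait pattern, using the binary path and flags shown in the trace; the polling loop below is a simplified stand-in for the waitforlisten helper from autotest_common.sh, and rpc_get_methods is used only as a convenient probe for RPC readiness:

    #!/usr/bin/env bash
    # Sketch: relaunch nvmf_tgt gated on RPC, as in target/auth.sh@139.
    NVMF_TGT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt
    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Start the target; --wait-for-rpc holds initialization until the
    # framework_start_init RPC, -L nvmf_auth enables auth debug logging.
    "$NVMF_TGT" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Simplified waitforlisten: poll the UNIX-domain RPC socket until
    # the app answers (rpc_get_methods succeeds once it is listening).
    until "$RPC" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    echo "nvmf_tgt ready with pid $nvmfpid"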
00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.893 10:06:20 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2567682 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2567682 ']' 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.893 10:06:21 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:37.893 00:17:37.893 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.893 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.893 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.893 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.893 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.893 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.893 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.893 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.893 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.893 { 00:17:37.893 "cntlid": 1, 00:17:37.893 "qid": 0, 00:17:37.893 "state": "enabled", 00:17:37.893 "thread": "nvmf_tgt_poll_group_000", 00:17:37.893 "listen_address": { 00:17:37.893 "trtype": "RDMA", 00:17:37.893 "adrfam": "IPv4", 00:17:37.893 "traddr": "192.168.100.8", 00:17:37.893 "trsvcid": "4420" 00:17:37.893 }, 00:17:37.893 "peer_address": { 00:17:37.893 "trtype": "RDMA", 00:17:37.893 "adrfam": "IPv4", 00:17:37.893 "traddr": "192.168.100.8", 00:17:37.893 "trsvcid": "38264" 00:17:37.893 }, 00:17:37.893 "auth": { 00:17:37.893 "state": "completed", 00:17:37.893 "digest": "sha512", 00:17:37.893 "dhgroup": "ffdhe8192" 00:17:37.893 } 00:17:37.893 } 00:17:37.893 ]' 00:17:37.894 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.894 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.894 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:37.894 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:37.894 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:37.894 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.894 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.894 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:37.894 10:06:22 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t rdma -a 192.168.100.8 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-secret DHHC-1:03:NzU5OTM0OGE2ZDc3ZTM4ODRmZjZhOWI3MDQ5ZTFiZWQ0OTZiZWI5YjA0MDkzYzhiYTZmYjE3Y2FlZTQyNDgzZCqVHiQ=: 00:17:38.460 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.460 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.460 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:38.460 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.460 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.460 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.460 10:06:23 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:38.460 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:38.460 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.460 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:38.460 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:38.460 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:38.719 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.719 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:38.719 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.719 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:38.719 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.719 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:38.719 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:38.719 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.719 10:06:23 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.797 request: 00:18:10.797 { 00:18:10.797 "name": "nvme0", 00:18:10.797 "trtype": "rdma", 00:18:10.797 "traddr": "192.168.100.8", 00:18:10.797 "adrfam": "ipv4", 00:18:10.797 "trsvcid": "4420", 00:18:10.797 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:10.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:10.797 "prchk_reftag": false, 00:18:10.797 "prchk_guard": false, 00:18:10.797 "hdgst": false, 00:18:10.797 "ddgst": false, 00:18:10.797 "dhchap_key": "key3", 00:18:10.797 "method": "bdev_nvme_attach_controller", 00:18:10.797 "req_id": 1 00:18:10.797 } 00:18:10.797 Got JSON-RPC error response 00:18:10.797 response: 
00:18:10.797 { 00:18:10.797 "code": -5, 00:18:10.797 "message": "Input/output error" 00:18:10.797 } 00:18:10.797 10:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:10.797 10:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:10.797 10:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:10.797 10:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:10.797 10:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:10.797 10:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:10.797 10:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:10.797 10:06:53 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:10.797 10:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.797 10:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:10.797 10:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.797 10:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:10.797 10:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.797 10:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:10.797 10:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:10.797 10:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:10.797 10:06:54 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.870 request: 00:18:42.870 { 00:18:42.870 "name": "nvme0", 00:18:42.870 "trtype": "rdma", 00:18:42.870 "traddr": "192.168.100.8", 00:18:42.870 "adrfam": "ipv4", 00:18:42.870 "trsvcid": "4420", 00:18:42.870 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:42.870 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:42.870 
"prchk_reftag": false, 00:18:42.870 "prchk_guard": false, 00:18:42.870 "hdgst": false, 00:18:42.870 "ddgst": false, 00:18:42.870 "dhchap_key": "key3", 00:18:42.870 "method": "bdev_nvme_attach_controller", 00:18:42.870 "req_id": 1 00:18:42.870 } 00:18:42.870 Got JSON-RPC error response 00:18:42.870 response: 00:18:42.870 { 00:18:42.870 "code": -5, 00:18:42.870 "message": "Input/output error" 00:18:42.870 } 00:18:42.870 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:42.870 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:42.870 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:42.870 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:42.870 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:42.870 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:42.870 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:42.870 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:42.870 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:18:42.871 10:07:24 
nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:42.871 request: 00:18:42.871 { 00:18:42.871 "name": "nvme0", 00:18:42.871 "trtype": "rdma", 00:18:42.871 "traddr": "192.168.100.8", 00:18:42.871 "adrfam": "ipv4", 00:18:42.871 "trsvcid": "4420", 00:18:42.871 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:42.871 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:42.871 "prchk_reftag": false, 00:18:42.871 "prchk_guard": false, 00:18:42.871 "hdgst": false, 00:18:42.871 "ddgst": false, 00:18:42.871 "dhchap_key": "key0", 00:18:42.871 "dhchap_ctrlr_key": "key1", 00:18:42.871 "method": "bdev_nvme_attach_controller", 00:18:42.871 "req_id": 1 00:18:42.871 } 00:18:42.871 Got JSON-RPC error response 00:18:42.871 response: 00:18:42.871 { 00:18:42.871 "code": -5, 00:18:42.871 "message": "Input/output error" 00:18:42.871 } 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:42.871 10:07:24 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:42.871 00:18:42.871 
10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2534805 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2534805 ']' 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2534805 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2534805 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2534805' 00:18:42.871 killing process with pid 2534805 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2534805 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2534805 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:18:42.871 rmmod nvme_rdma 00:18:42.871 rmmod nvme_fabrics 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2567682 ']' 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2567682 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2567682 ']' 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2567682 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2567682 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2567682' 00:18:42.871 killing process with pid 2567682 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2567682 00:18:42.871 10:07:25 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2567682 00:18:42.871 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:42.871 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:18:42.871 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.QjR /tmp/spdk.key-sha256.A9o /tmp/spdk.key-sha384.bH4 /tmp/spdk.key-sha512.bIE /tmp/spdk.key-sha512.yKc /tmp/spdk.key-sha384.tZs /tmp/spdk.key-sha256.u84 '' /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf-auth.log 00:18:42.871 00:18:42.871 real 4m22.987s 00:18:42.871 user 9m28.846s 00:18:42.871 sys 0m19.278s 00:18:42.871 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:42.871 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.871 ************************************ 00:18:42.871 END TEST nvmf_auth_target 00:18:42.871 ************************************ 00:18:42.871 10:07:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' rdma = tcp ']' 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' rdma = tcp ']' 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # [[ rdma == \r\d\m\a ]] 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@61 -- # run_test nvmf_srq_overwhelm 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:42.872 ************************************ 00:18:42.872 START TEST nvmf_srq_overwhelm 00:18:42.872 ************************************ 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/srq_overwhelm.sh --transport=rdma 00:18:42.872 * Looking for test storage... 00:18:42.872 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # uname -s 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm 
-- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@5 -- # export PATH 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@47 -- # : 0 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.872 10:07:26 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@13 -- # NVME_CONNECT='nvme connect -i 16' 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@15 -- # nvmftestinit 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@285 -- # xtrace_disable 00:18:42.872 10:07:26 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:47.064 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.064 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.064 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.064 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # e810=() 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # x722=() 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # mlx=() 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:18:47.065 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ 
mlx5_core == unbound ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:18:47.065 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:18:47.065 Found net devices under 0000:da:00.0: mlx_0_0 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:18:47.065 
Found net devices under 0000:da:00.1: mlx_0_1 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@420 -- # rdma_device_init 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # uname 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@62 -- # modprobe ib_cm 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@63 -- # modprobe ib_core 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@64 -- # modprobe ib_umad 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:18:47.065 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@66 -- # modprobe iw_cm 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@502 -- # allocate_nic_ips 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # get_rdma_if_list 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
nvmf/common.sh@105 -- # continue 2 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:18:47.066 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:47.066 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:18:47.066 altname enp218s0f0np0 00:18:47.066 altname ens818f0np0 00:18:47.066 inet 192.168.100.8/24 scope global mlx_0_0 00:18:47.066 valid_lft forever preferred_lft forever 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:18:47.066 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:47.066 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:18:47.066 altname enp218s0f1np1 00:18:47.066 altname ens818f1np1 00:18:47.066 
inet 192.168.100.9/24 scope global mlx_0_1 00:18:47.066 valid_lft forever preferred_lft forever 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@422 -- # return 0 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # get_rdma_if_list 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_0 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@104 -- # echo mlx_0_1 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@105 -- # continue 2 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:18:47.066 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:47.067 10:07:31 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # awk '{print $4}' 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@113 -- # cut -d/ -f1 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:18:47.067 192.168.100.9' 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:18:47.067 192.168.100.9' 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # head -n 1 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:18:47.067 192.168.100.9' 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # tail -n +2 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # head -n 1 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:18:47.067 10:07:31 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@17 -- # nvmfappstart -m 0xF 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@481 -- # nvmfpid=2581473 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@482 -- # waitforlisten 2581473 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@831 -- # '[' -z 2581473 ']' 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:47.067 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:47.067 [2024-07-25 10:07:32.063578] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:18:47.067 [2024-07-25 10:07:32.063624] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.067 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.067 [2024-07-25 10:07:32.131007] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:47.067 [2024-07-25 10:07:32.215147] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.067 [2024-07-25 10:07:32.215183] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.067 [2024-07-25 10:07:32.215190] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.067 [2024-07-25 10:07:32.215195] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.067 [2024-07-25 10:07:32.215200] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
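The target above was launched as nvmf_tgt -i 0 -e 0xFFFF -m 0xF. A rough annotation of those flags, offered as a sketch alongside the captured output rather than part of it:

# -m 0xF     core mask 0b1111: one reactor per core 0-3, matching the four
#            "Reactor started on core N" notices that follow in this trace
# -e 0xFFFF  tracepoint group mask: enable all groups; the snapshot lands in
#            /dev/shm/nvmf_trace.0, as the app_setup_trace notices above say
# -i 0       shared-memory instance id (NVMF_APP_SHM_ID), the same id used by
#            'spdk_trace -s nvmf -i 0'
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF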
00:18:47.067 [2024-07-25 10:07:32.215280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:47.067 [2024-07-25 10:07:32.215410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.067 [2024-07-25 10:07:32.215513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.067 [2024-07-25 10:07:32.215514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@864 -- # return 0 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.003 [2024-07-25 10:07:32.938531] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x20e6cc0/0x20eb1b0) succeed. 00:18:48.003 [2024-07-25 10:07:32.947627] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x20e8300/0x212c840) succeed. 
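From here the script walks the same five steps for cnode0 through cnode5. The per-iteration shape, reconstructed from the rpc_cmd and nvme calls logged below (a sketch of what the trace shows, not the verbatim srq_overwhelm.sh source; the serial-number pattern is inferred from the values visible in this excerpt):

for i in $(seq 0 5); do
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
  rpc_cmd bdev_malloc_create 64 512 -b Malloc$i   # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
  # NVME_CONNECT was rewritten to 'nvme connect -i 15' for the mlx5 NICs earlier in the trace
  nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
    --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 \
    -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420
  waitforblk nvme${i}n1   # block until /dev/nvme${i}n1 appears in lsblk
done

The transport created just above this loop (nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -s 1024) is what the test pushes against: with rpc.py's rdma transport options of this SPDK vintage, -u sets the io-unit size and -s the max SRQ depth, so the shared receive queue is capped at 1024 entries before the overwhelm workload starts.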
00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # seq 0 5 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000000 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.003 10:07:32 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.003 Malloc0 00:18:48.003 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.003 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:18:48.003 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.003 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.003 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.003 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:48.003 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.003 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.003 [2024-07-25 10:07:33.037925] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:48.003 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.003 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode0 -a 192.168.100.8 -s 4420 00:18:48.937 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme0n1 00:18:48.937 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:48.937 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:48.937 10:07:33 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- 
# grep -q -w nvme0n1 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.937 Malloc1 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.937 10:07:34 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme1n1 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme1n1 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@1242 -- # grep -q -w nvme1n1 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:50.313 Malloc2 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.313 10:07:35 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme2n1 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme2n1 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme2n1 00:18:51.246 10:07:36 
nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:51.246 Malloc3 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.246 10:07:36 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme3n1 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme3n1 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme3n1 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:52.220 
10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:52.220 Malloc4 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.220 10:07:37 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme4n1 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme4n1 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme4n1 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@22 -- # for i in $(seq 0 5) 
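Each of these iterations is gated by the waitforblk helper seen repeatedly above; its shape as visible at common/autotest_common.sh@1235-1246 in this trace, with the polling bounds assumed since they are not captured in this excerpt:

# reconstructed sketch; the retry cap and sleep interval are assumptions
waitforblk() {
  local i=0
  while ! lsblk -l -o NAME | grep -q -w "$1"; do
    sleep 0.1                      # assumed poll interval
    (( ++i > 100 )) && return 1    # assumed retry cap
  done
  return 0
}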
00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK00000000000005 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:53.155 Malloc5 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.155 10:07:38 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@27 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:18:54.089 10:07:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@28 -- # waitforblk nvme5n1 00:18:54.089 10:07:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1235 -- # local i=0 00:18:54.089 10:07:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:18:54.089 10:07:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1236 -- # grep -q -w nvme5n1 00:18:54.089 10:07:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:18:54.089 10:07:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1242 -- # grep -q -w nvme5n1 00:18:54.089 10:07:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1246 -- # return 0 00:18:54.089 10:07:39 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 1048576 -d 128 -t read -r 10 -n 13 00:18:54.089 
[global] 00:18:54.089 thread=1 00:18:54.089 invalidate=1 00:18:54.089 rw=read 00:18:54.089 time_based=1 00:18:54.089 runtime=10 00:18:54.089 ioengine=libaio 00:18:54.089 direct=1 00:18:54.089 bs=1048576 00:18:54.089 iodepth=128 00:18:54.089 norandommap=1 00:18:54.089 numjobs=13 00:18:54.089 00:18:54.089 [job0] 00:18:54.089 filename=/dev/nvme0n1 00:18:54.089 [job1] 00:18:54.089 filename=/dev/nvme1n1 00:18:54.089 [job2] 00:18:54.089 filename=/dev/nvme2n1 00:18:54.089 [job3] 00:18:54.089 filename=/dev/nvme3n1 00:18:54.089 [job4] 00:18:54.089 filename=/dev/nvme4n1 00:18:54.089 [job5] 00:18:54.089 filename=/dev/nvme5n1 00:18:54.363 Could not set queue depth (nvme0n1) 00:18:54.363 Could not set queue depth (nvme1n1) 00:18:54.363 Could not set queue depth (nvme2n1) 00:18:54.363 Could not set queue depth (nvme3n1) 00:18:54.363 Could not set queue depth (nvme4n1) 00:18:54.363 Could not set queue depth (nvme5n1) 00:18:54.630 job0: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:54.630 ... 00:18:54.630 job1: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:54.630 ... 00:18:54.630 job2: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:54.630 ... 00:18:54.630 job3: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:54.630 ... 00:18:54.630 job4: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:54.630 ... 00:18:54.630 job5: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=libaio, iodepth=128 00:18:54.630 ... 
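fio is driven here through scripts/fio-wrapper; judging purely by the job file it printed above, the wrapper flags appear to map one-to-one onto the config:

#   -i 1048576  -> bs=1048576 (1 MiB blocks)
#   -d 128      -> iodepth=128
#   -t read     -> rw=read
#   -r 10       -> time_based=1, runtime=10
#   -n 13       -> numjobs=13 per job section
# Six [jobN] sections (one per /dev/nvmeXn1) x 13 jobs each accounts for the
# 78 threads fio announces below. The 'Could not set queue depth' lines are
# fio's own warnings (it could not raise the devices' queue settings), not
# test failures; the run proceeds regardless.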
00:18:54.630 fio-3.35 00:18:54.630 Starting 78 threads 00:19:09.510 00:19:09.510 job0: (groupid=0, jobs=1): err= 0: pid=2582883: Thu Jul 25 10:07:53 2024 00:19:09.510 read: IOPS=109, BW=109MiB/s (115MB/s)(1540MiB/14080msec) 00:19:09.510 slat (usec): min=45, max=2098.8k, avg=7735.32, stdev=75597.67 00:19:09.510 clat (msec): min=434, max=6916, avg=1130.88, stdev=1659.54 00:19:09.510 lat (msec): min=446, max=6920, avg=1138.62, stdev=1664.54 00:19:09.510 clat percentiles (msec): 00:19:09.510 | 1.00th=[ 477], 5.00th=[ 493], 10.00th=[ 527], 20.00th=[ 535], 00:19:09.510 | 30.00th=[ 558], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 676], 00:19:09.510 | 70.00th=[ 693], 80.00th=[ 709], 90.00th=[ 885], 95.00th=[ 6611], 00:19:09.510 | 99.00th=[ 6879], 99.50th=[ 6879], 99.90th=[ 6946], 99.95th=[ 6946], 00:19:09.510 | 99.99th=[ 6946] 00:19:09.510 bw ( KiB/s): min= 2052, max=260096, per=6.75%, avg=170173.88, stdev=83228.22, samples=17 00:19:09.510 iops : min= 2, max= 254, avg=166.12, stdev=81.24, samples=17 00:19:09.510 lat (msec) : 500=6.04%, 750=78.96%, 1000=6.56%, >=2000=8.44% 00:19:09.510 cpu : usr=0.11%, sys=1.98%, ctx=1357, majf=0, minf=32769 00:19:09.510 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:19:09.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.510 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.510 issued rwts: total=1540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.510 job0: (groupid=0, jobs=1): err= 0: pid=2582884: Thu Jul 25 10:07:53 2024 00:19:09.510 read: IOPS=4, BW=4491KiB/s (4599kB/s)(57.0MiB/12997msec) 00:19:09.510 slat (usec): min=557, max=2124.8k, avg=190730.45, stdev=594914.29 00:19:09.510 clat (msec): min=2124, max=12995, avg=11884.65, stdev=2376.33 00:19:09.510 lat (msec): min=4249, max=12996, avg=12075.38, stdev=1982.62 00:19:09.510 clat percentiles (msec): 00:19:09.510 | 1.00th=[ 2123], 5.00th=[ 6342], 10.00th=[ 8557], 20.00th=[12684], 00:19:09.510 | 30.00th=[12684], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:19:09.510 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:09.510 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.510 | 99.99th=[12953] 00:19:09.510 lat (msec) : >=2000=100.00% 00:19:09.510 cpu : usr=0.00%, sys=0.35%, ctx=62, majf=0, minf=14593 00:19:09.510 IO depths : 1=1.8%, 2=3.5%, 4=7.0%, 8=14.0%, 16=28.1%, 32=45.6%, >=64=0.0% 00:19:09.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.510 issued rwts: total=57,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.510 job0: (groupid=0, jobs=1): err= 0: pid=2582885: Thu Jul 25 10:07:53 2024 00:19:09.510 read: IOPS=10, BW=10.0MiB/s (10.5MB/s)(130MiB/12982msec) 00:19:09.510 slat (usec): min=696, max=2140.5k, avg=83517.15, stdev=382632.94 00:19:09.510 clat (msec): min=2124, max=12939, avg=11959.92, stdev=1883.19 00:19:09.510 lat (msec): min=4248, max=12948, avg=12043.44, stdev=1671.93 00:19:09.510 clat percentiles (msec): 00:19:09.510 | 1.00th=[ 4245], 5.00th=[ 6409], 10.00th=[10805], 20.00th=[12147], 00:19:09.510 | 30.00th=[12281], 40.00th=[12416], 50.00th=[12416], 60.00th=[12550], 00:19:09.510 | 70.00th=[12684], 80.00th=[12684], 90.00th=[12818], 95.00th=[12953], 00:19:09.510 | 99.00th=[12953], 
99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.510 | 99.99th=[12953] 00:19:09.510 bw ( KiB/s): min= 2048, max= 4096, per=0.12%, avg=3072.00, stdev=1448.15, samples=2 00:19:09.510 iops : min= 2, max= 4, avg= 3.00, stdev= 1.41, samples=2 00:19:09.510 lat (msec) : >=2000=100.00% 00:19:09.510 cpu : usr=0.01%, sys=0.73%, ctx=133, majf=0, minf=32769 00:19:09.510 IO depths : 1=0.8%, 2=1.5%, 4=3.1%, 8=6.2%, 16=12.3%, 32=24.6%, >=64=51.5% 00:19:09.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.510 complete : 0=0.0%, 4=75.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=25.0% 00:19:09.510 issued rwts: total=130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.510 job0: (groupid=0, jobs=1): err= 0: pid=2582886: Thu Jul 25 10:07:53 2024 00:19:09.510 read: IOPS=4, BW=4712KiB/s (4825kB/s)(60.0MiB/13040msec) 00:19:09.510 slat (usec): min=673, max=2122.1k, avg=181913.31, stdev=585469.28 00:19:09.510 clat (msec): min=2124, max=13038, avg=11497.56, stdev=3076.09 00:19:09.510 lat (msec): min=4246, max=13039, avg=11679.47, stdev=2824.89 00:19:09.510 clat percentiles (msec): 00:19:09.510 | 1.00th=[ 2123], 5.00th=[ 4279], 10.00th=[ 4329], 20.00th=[ 8557], 00:19:09.510 | 30.00th=[12953], 40.00th=[12953], 50.00th=[12953], 60.00th=[12953], 00:19:09.510 | 70.00th=[12953], 80.00th=[13087], 90.00th=[13087], 95.00th=[13087], 00:19:09.510 | 99.00th=[13087], 99.50th=[13087], 99.90th=[13087], 99.95th=[13087], 00:19:09.510 | 99.99th=[13087] 00:19:09.510 lat (msec) : >=2000=100.00% 00:19:09.510 cpu : usr=0.00%, sys=0.39%, ctx=100, majf=0, minf=15361 00:19:09.510 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:19:09.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.510 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.510 job0: (groupid=0, jobs=1): err= 0: pid=2582887: Thu Jul 25 10:07:53 2024 00:19:09.510 read: IOPS=1, BW=1422KiB/s (1457kB/s)(15.0MiB/10798msec) 00:19:09.510 slat (msec): min=8, max=2191, avg=716.92, stdev=1024.07 00:19:09.510 clat (msec): min=43, max=10784, avg=6435.36, stdev=3984.92 00:19:09.510 lat (msec): min=2094, max=10797, avg=7152.29, stdev=3710.81 00:19:09.510 clat percentiles (msec): 00:19:09.510 | 1.00th=[ 44], 5.00th=[ 44], 10.00th=[ 2089], 20.00th=[ 2123], 00:19:09.510 | 30.00th=[ 2165], 40.00th=[ 4245], 50.00th=[ 6409], 60.00th=[ 8557], 00:19:09.510 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:19:09.510 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:19:09.510 | 99.99th=[10805] 00:19:09.510 lat (msec) : 50=6.67%, >=2000=93.33% 00:19:09.510 cpu : usr=0.00%, sys=0.11%, ctx=67, majf=0, minf=3841 00:19:09.510 IO depths : 1=6.7%, 2=13.3%, 4=26.7%, 8=53.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.510 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.510 issued rwts: total=15,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.510 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.510 job0: (groupid=0, jobs=1): err= 0: pid=2582888: Thu Jul 25 10:07:53 2024 00:19:09.510 read: IOPS=1, BW=1900KiB/s (1946kB/s)(20.0MiB/10778msec) 00:19:09.510 slat (msec): min=7, max=2123, 
avg=536.28, stdev=921.17 00:19:09.510 clat (msec): min=51, max=10765, avg=5774.39, stdev=3560.25 00:19:09.511 lat (msec): min=2085, max=10777, avg=6310.66, stdev=3459.21 00:19:09.511 clat percentiles (msec): 00:19:09.511 | 1.00th=[ 52], 5.00th=[ 52], 10.00th=[ 2089], 20.00th=[ 2123], 00:19:09.511 | 30.00th=[ 2165], 40.00th=[ 4245], 50.00th=[ 4279], 60.00th=[ 6409], 00:19:09.511 | 70.00th=[ 6409], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:19:09.511 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:19:09.511 | 99.99th=[10805] 00:19:09.511 lat (msec) : 100=5.00%, >=2000=95.00% 00:19:09.511 cpu : usr=0.00%, sys=0.14%, ctx=60, majf=0, minf=5121 00:19:09.511 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:19:09.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.511 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:09.511 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.511 job0: (groupid=0, jobs=1): err= 0: pid=2582889: Thu Jul 25 10:07:53 2024 00:19:09.511 read: IOPS=50, BW=50.3MiB/s (52.8MB/s)(706MiB/14028msec) 00:19:09.511 slat (usec): min=74, max=2077.0k, avg=16802.97, stdev=134630.42 00:19:09.511 clat (msec): min=587, max=9076, avg=2407.90, stdev=2916.41 00:19:09.511 lat (msec): min=597, max=9077, avg=2424.70, stdev=2924.00 00:19:09.511 clat percentiles (msec): 00:19:09.511 | 1.00th=[ 609], 5.00th=[ 667], 10.00th=[ 701], 20.00th=[ 835], 00:19:09.511 | 30.00th=[ 852], 40.00th=[ 961], 50.00th=[ 1150], 60.00th=[ 1217], 00:19:09.511 | 70.00th=[ 1351], 80.00th=[ 1401], 90.00th=[ 8792], 95.00th=[ 8926], 00:19:09.511 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:19:09.511 | 99.99th=[ 9060] 00:19:09.511 bw ( KiB/s): min= 2052, max=198656, per=3.62%, avg=91199.85, stdev=69881.02, samples=13 00:19:09.511 iops : min= 2, max= 194, avg=88.85, stdev=68.46, samples=13 00:19:09.511 lat (msec) : 750=14.31%, 1000=26.91%, 2000=38.95%, >=2000=19.83% 00:19:09.511 cpu : usr=0.03%, sys=0.95%, ctx=1223, majf=0, minf=32769 00:19:09.511 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.3%, 32=4.5%, >=64=91.1% 00:19:09.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.511 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:09.511 issued rwts: total=706,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.511 job0: (groupid=0, jobs=1): err= 0: pid=2582890: Thu Jul 25 10:07:53 2024 00:19:09.511 read: IOPS=6, BW=7167KiB/s (7339kB/s)(76.0MiB/10859msec) 00:19:09.511 slat (usec): min=723, max=2124.3k, avg=142165.84, stdev=501710.91 00:19:09.511 clat (msec): min=53, max=10857, avg=7231.08, stdev=2624.76 00:19:09.511 lat (msec): min=2093, max=10858, avg=7373.24, stdev=2521.39 00:19:09.511 clat percentiles (msec): 00:19:09.511 | 1.00th=[ 54], 5.00th=[ 2123], 10.00th=[ 4279], 20.00th=[ 6074], 00:19:09.511 | 30.00th=[ 6141], 40.00th=[ 6208], 50.00th=[ 6275], 60.00th=[ 6342], 00:19:09.511 | 70.00th=[ 8557], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:19:09.511 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:19:09.511 | 99.99th=[10805] 00:19:09.511 lat (msec) : 100=1.32%, >=2000=98.68% 00:19:09.511 cpu : usr=0.01%, sys=0.52%, ctx=167, majf=0, minf=19457 00:19:09.511 IO depths : 1=1.3%, 2=2.6%, 4=5.3%, 8=10.5%, 16=21.1%, 
32=42.1%, >=64=17.1% 00:19:09.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.511 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:09.511 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.511 job0: (groupid=0, jobs=1): err= 0: pid=2582891: Thu Jul 25 10:07:53 2024 00:19:09.511 read: IOPS=47, BW=47.1MiB/s (49.4MB/s)(665MiB/14109msec) 00:19:09.511 slat (usec): min=46, max=5357.4k, avg=17974.51, stdev=217341.72 00:19:09.511 clat (msec): min=110, max=11971, avg=2602.28, stdev=3963.35 00:19:09.511 lat (msec): min=110, max=11977, avg=2620.26, stdev=3978.84 00:19:09.511 clat percentiles (msec): 00:19:09.511 | 1.00th=[ 114], 5.00th=[ 128], 10.00th=[ 129], 20.00th=[ 130], 00:19:09.511 | 30.00th=[ 197], 40.00th=[ 321], 50.00th=[ 600], 60.00th=[ 894], 00:19:09.511 | 70.00th=[ 1921], 80.00th=[ 2836], 90.00th=[10402], 95.00th=[11342], 00:19:09.511 | 99.00th=[11879], 99.50th=[11879], 99.90th=[12013], 99.95th=[12013], 00:19:09.511 | 99.99th=[12013] 00:19:09.511 bw ( KiB/s): min= 2048, max=497664, per=3.97%, avg=100166.18, stdev=145198.32, samples=11 00:19:09.511 iops : min= 2, max= 486, avg=97.82, stdev=141.80, samples=11 00:19:09.511 lat (msec) : 250=33.83%, 500=12.18%, 750=8.87%, 1000=8.42%, 2000=7.22% 00:19:09.511 lat (msec) : >=2000=29.47% 00:19:09.511 cpu : usr=0.00%, sys=0.92%, ctx=1248, majf=0, minf=32769 00:19:09.511 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.4%, 32=4.8%, >=64=90.5% 00:19:09.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.511 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:09.511 issued rwts: total=665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.511 job0: (groupid=0, jobs=1): err= 0: pid=2582892: Thu Jul 25 10:07:53 2024 00:19:09.511 read: IOPS=22, BW=22.4MiB/s (23.5MB/s)(292MiB/13039msec) 00:19:09.511 slat (usec): min=426, max=2134.5k, avg=37383.97, stdev=241167.30 00:19:09.511 clat (msec): min=655, max=11914, avg=5510.79, stdev=4967.86 00:19:09.511 lat (msec): min=655, max=11920, avg=5548.17, stdev=4974.85 00:19:09.511 clat percentiles (msec): 00:19:09.511 | 1.00th=[ 676], 5.00th=[ 726], 10.00th=[ 768], 20.00th=[ 1011], 00:19:09.511 | 30.00th=[ 1183], 40.00th=[ 1385], 50.00th=[ 1418], 60.00th=[10537], 00:19:09.511 | 70.00th=[10939], 80.00th=[11342], 90.00th=[11476], 95.00th=[11610], 00:19:09.511 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:19:09.511 | 99.99th=[11879] 00:19:09.511 bw ( KiB/s): min= 2048, max=112640, per=1.49%, avg=37546.67, stdev=43396.34, samples=9 00:19:09.511 iops : min= 2, max= 110, avg=36.67, stdev=42.38, samples=9 00:19:09.511 lat (msec) : 750=7.53%, 1000=11.64%, 2000=35.62%, >=2000=45.21% 00:19:09.511 cpu : usr=0.00%, sys=0.68%, ctx=783, majf=0, minf=32769 00:19:09.511 IO depths : 1=0.3%, 2=0.7%, 4=1.4%, 8=2.7%, 16=5.5%, 32=11.0%, >=64=78.4% 00:19:09.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.511 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:19:09.511 issued rwts: total=292,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.511 job0: (groupid=0, jobs=1): err= 0: pid=2582893: Thu Jul 25 10:07:53 2024 00:19:09.511 read: IOPS=51, BW=51.1MiB/s (53.5MB/s)(718MiB/14060msec) 00:19:09.511 slat 
(usec): min=45, max=4280.5k, avg=16590.47, stdev=213814.48 00:19:09.511 clat (msec): min=131, max=12214, avg=2428.41, stdev=4413.67 00:19:09.511 lat (msec): min=133, max=12216, avg=2445.00, stdev=4427.61 00:19:09.511 clat percentiles (msec): 00:19:09.511 | 1.00th=[ 133], 5.00th=[ 133], 10.00th=[ 134], 20.00th=[ 136], 00:19:09.511 | 30.00th=[ 190], 40.00th=[ 388], 50.00th=[ 397], 60.00th=[ 435], 00:19:09.511 | 70.00th=[ 709], 80.00th=[ 827], 90.00th=[12013], 95.00th=[12147], 00:19:09.511 | 99.00th=[12147], 99.50th=[12147], 99.90th=[12281], 99.95th=[12281], 00:19:09.511 | 99.99th=[12281] 00:19:09.511 bw ( KiB/s): min= 2048, max=593920, per=6.00%, avg=151295.38, stdev=212669.91, samples=8 00:19:09.511 iops : min= 2, max= 580, avg=147.62, stdev=207.78, samples=8 00:19:09.511 lat (msec) : 250=34.82%, 500=28.97%, 750=9.61%, 1000=8.08%, >=2000=18.52% 00:19:09.511 cpu : usr=0.01%, sys=0.71%, ctx=911, majf=0, minf=32769 00:19:09.511 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.1%, 16=2.2%, 32=4.5%, >=64=91.2% 00:19:09.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.511 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:09.511 issued rwts: total=718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.511 job0: (groupid=0, jobs=1): err= 0: pid=2582894: Thu Jul 25 10:07:53 2024 00:19:09.511 read: IOPS=25, BW=25.4MiB/s (26.7MB/s)(356MiB/13996msec) 00:19:09.511 slat (usec): min=42, max=2104.1k, avg=33281.11, stdev=242647.98 00:19:09.511 clat (msec): min=368, max=11750, avg=3163.95, stdev=3136.92 00:19:09.511 lat (msec): min=371, max=11763, avg=3197.23, stdev=3176.27 00:19:09.511 clat percentiles (msec): 00:19:09.511 | 1.00th=[ 372], 5.00th=[ 372], 10.00th=[ 372], 20.00th=[ 376], 00:19:09.511 | 30.00th=[ 388], 40.00th=[ 409], 50.00th=[ 498], 60.00th=[ 4463], 00:19:09.511 | 70.00th=[ 4597], 80.00th=[ 4665], 90.00th=[ 8792], 95.00th=[ 8926], 00:19:09.511 | 99.00th=[ 9597], 99.50th=[ 9597], 99.90th=[11745], 99.95th=[11745], 00:19:09.511 | 99.99th=[11745] 00:19:09.511 bw ( KiB/s): min= 2052, max=282624, per=4.65%, avg=117249.00, stdev=118149.84, samples=4 00:19:09.511 iops : min= 2, max= 276, avg=114.50, stdev=115.38, samples=4 00:19:09.511 lat (msec) : 500=50.00%, >=2000=50.00% 00:19:09.511 cpu : usr=0.00%, sys=0.67%, ctx=298, majf=0, minf=32769 00:19:09.511 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.2%, 16=4.5%, 32=9.0%, >=64=82.3% 00:19:09.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.511 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:09.511 issued rwts: total=356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.511 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.511 job0: (groupid=0, jobs=1): err= 0: pid=2582895: Thu Jul 25 10:07:53 2024 00:19:09.511 read: IOPS=15, BW=15.4MiB/s (16.2MB/s)(216MiB/13982msec) 00:19:09.511 slat (usec): min=54, max=2093.6k, avg=54724.78, stdev=311064.83 00:19:09.511 clat (msec): min=840, max=11386, avg=7010.07, stdev=4422.62 00:19:09.511 lat (msec): min=845, max=11390, avg=7064.79, stdev=4406.41 00:19:09.511 clat percentiles (msec): 00:19:09.511 | 1.00th=[ 844], 5.00th=[ 852], 10.00th=[ 852], 20.00th=[ 894], 00:19:09.511 | 30.00th=[ 2903], 40.00th=[ 5067], 50.00th=[10671], 60.00th=[10805], 00:19:09.511 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342], 00:19:09.511 | 99.00th=[11342], 99.50th=[11342], 99.90th=[11342], 99.95th=[11342], 00:19:09.511 | 
99.99th=[11342] 00:19:09.511 bw ( KiB/s): min= 2052, max=104448, per=1.20%, avg=30374.50, stdev=39041.15, samples=6 00:19:09.511 iops : min= 2, max= 102, avg=29.50, stdev=38.21, samples=6 00:19:09.511 lat (msec) : 1000=23.15%, >=2000=76.85% 00:19:09.511 cpu : usr=0.00%, sys=0.74%, ctx=171, majf=0, minf=32769 00:19:09.511 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.7%, 16=7.4%, 32=14.8%, >=64=70.8% 00:19:09.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.512 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:19:09.512 issued rwts: total=216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.512 job1: (groupid=0, jobs=1): err= 0: pid=2582901: Thu Jul 25 10:07:53 2024 00:19:09.512 read: IOPS=9, BW=9911KiB/s (10.1MB/s)(125MiB/12915msec) 00:19:09.512 slat (usec): min=561, max=2186.9k, avg=86353.79, stdev=388059.83 00:19:09.512 clat (msec): min=2119, max=12911, avg=10707.32, stdev=1582.99 00:19:09.512 lat (msec): min=4306, max=12914, avg=10793.68, stdev=1393.88 00:19:09.512 clat percentiles (msec): 00:19:09.512 | 1.00th=[ 4329], 5.00th=[10000], 10.00th=[10000], 20.00th=[10134], 00:19:09.512 | 30.00th=[10268], 40.00th=[10268], 50.00th=[10402], 60.00th=[10537], 00:19:09.512 | 70.00th=[10671], 80.00th=[12818], 90.00th=[12953], 95.00th=[12953], 00:19:09.512 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.512 | 99.99th=[12953] 00:19:09.512 lat (msec) : >=2000=100.00% 00:19:09.512 cpu : usr=0.00%, sys=0.64%, ctx=179, majf=0, minf=32001 00:19:09.512 IO depths : 1=0.8%, 2=1.6%, 4=3.2%, 8=6.4%, 16=12.8%, 32=25.6%, >=64=49.6% 00:19:09.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:09.512 issued rwts: total=125,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.512 job1: (groupid=0, jobs=1): err= 0: pid=2582902: Thu Jul 25 10:07:53 2024 00:19:09.512 read: IOPS=112, BW=112MiB/s (118MB/s)(1448MiB/12900msec) 00:19:09.512 slat (usec): min=37, max=2116.3k, avg=7434.69, stdev=83246.02 00:19:09.512 clat (msec): min=255, max=6729, avg=1012.60, stdev=1747.89 00:19:09.512 lat (msec): min=257, max=6731, avg=1020.04, stdev=1753.73 00:19:09.512 clat percentiles (msec): 00:19:09.512 | 1.00th=[ 255], 5.00th=[ 257], 10.00th=[ 257], 20.00th=[ 257], 00:19:09.512 | 30.00th=[ 342], 40.00th=[ 393], 50.00th=[ 430], 60.00th=[ 510], 00:19:09.512 | 70.00th=[ 567], 80.00th=[ 684], 90.00th=[ 1854], 95.00th=[ 6611], 00:19:09.512 | 99.00th=[ 6678], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:19:09.512 | 99.99th=[ 6745] 00:19:09.512 bw ( KiB/s): min= 2048, max=505856, per=8.94%, avg=225450.67, stdev=167026.37, samples=12 00:19:09.512 iops : min= 2, max= 494, avg=220.17, stdev=163.11, samples=12 00:19:09.512 lat (msec) : 500=57.67%, 750=23.55%, 1000=6.22%, 2000=2.62%, >=2000=9.94% 00:19:09.512 cpu : usr=0.02%, sys=0.98%, ctx=3144, majf=0, minf=32769 00:19:09.512 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:19:09.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.512 issued rwts: total=1448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.512 job1: (groupid=0, jobs=1): err= 
0: pid=2582903: Thu Jul 25 10:07:53 2024 00:19:09.512 read: IOPS=92, BW=92.5MiB/s (97.0MB/s)(1196MiB/12933msec) 00:19:09.512 slat (usec): min=41, max=2106.8k, avg=9040.61, stdev=90675.46 00:19:09.512 clat (msec): min=255, max=6733, avg=1201.90, stdev=1885.06 00:19:09.512 lat (msec): min=257, max=6736, avg=1210.94, stdev=1890.90 00:19:09.512 clat percentiles (msec): 00:19:09.512 | 1.00th=[ 255], 5.00th=[ 257], 10.00th=[ 257], 20.00th=[ 257], 00:19:09.512 | 30.00th=[ 266], 40.00th=[ 388], 50.00th=[ 401], 60.00th=[ 718], 00:19:09.512 | 70.00th=[ 894], 80.00th=[ 1150], 90.00th=[ 6477], 95.00th=[ 6611], 00:19:09.512 | 99.00th=[ 6745], 99.50th=[ 6745], 99.90th=[ 6745], 99.95th=[ 6745], 00:19:09.512 | 99.99th=[ 6745] 00:19:09.512 bw ( KiB/s): min= 1438, max=505856, per=7.23%, avg=182391.83, stdev=178063.05, samples=12 00:19:09.512 iops : min= 1, max= 494, avg=178.08, stdev=173.93, samples=12 00:19:09.512 lat (msec) : 500=57.69%, 750=4.01%, 1000=12.63%, 2000=14.38%, >=2000=11.29% 00:19:09.512 cpu : usr=0.00%, sys=0.94%, ctx=3358, majf=0, minf=32769 00:19:09.512 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:19:09.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.512 issued rwts: total=1196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.512 job1: (groupid=0, jobs=1): err= 0: pid=2582904: Thu Jul 25 10:07:53 2024 00:19:09.512 read: IOPS=3, BW=3936KiB/s (4030kB/s)(50.0MiB/13009msec) 00:19:09.512 slat (usec): min=802, max=2113.4k, avg=217751.05, stdev=626134.64 00:19:09.512 clat (msec): min=2120, max=13007, avg=11352.54, stdev=3030.90 00:19:09.512 lat (msec): min=4233, max=13008, avg=11570.29, stdev=2730.29 00:19:09.512 clat percentiles (msec): 00:19:09.512 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[ 8557], 00:19:09.512 | 30.00th=[12684], 40.00th=[12818], 50.00th=[12953], 60.00th=[12953], 00:19:09.512 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:09.512 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.512 | 99.99th=[12953] 00:19:09.512 lat (msec) : >=2000=100.00% 00:19:09.512 cpu : usr=0.00%, sys=0.34%, ctx=84, majf=0, minf=12801 00:19:09.512 IO depths : 1=2.0%, 2=4.0%, 4=8.0%, 8=16.0%, 16=32.0%, 32=38.0%, >=64=0.0% 00:19:09.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.512 issued rwts: total=50,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.512 job1: (groupid=0, jobs=1): err= 0: pid=2582905: Thu Jul 25 10:07:53 2024 00:19:09.512 read: IOPS=74, BW=74.7MiB/s (78.3MB/s)(811MiB/10855msec) 00:19:09.512 slat (usec): min=373, max=2091.0k, avg=13308.32, stdev=113776.86 00:19:09.512 clat (msec): min=59, max=5711, avg=1471.14, stdev=1764.36 00:19:09.512 lat (msec): min=392, max=5714, avg=1484.45, stdev=1768.30 00:19:09.512 clat percentiles (msec): 00:19:09.512 | 1.00th=[ 393], 5.00th=[ 393], 10.00th=[ 393], 20.00th=[ 393], 00:19:09.512 | 30.00th=[ 625], 40.00th=[ 667], 50.00th=[ 701], 60.00th=[ 818], 00:19:09.512 | 70.00th=[ 978], 80.00th=[ 1368], 90.00th=[ 5470], 95.00th=[ 5604], 00:19:09.512 | 99.00th=[ 5671], 99.50th=[ 5671], 99.90th=[ 5738], 99.95th=[ 5738], 00:19:09.512 | 99.99th=[ 5738] 00:19:09.512 bw ( KiB/s): min= 
4096, max=331776, per=5.55%, avg=139853.40, stdev=128479.06, samples=10 00:19:09.512 iops : min= 4, max= 324, avg=136.50, stdev=125.48, samples=10 00:19:09.512 lat (msec) : 100=0.12%, 500=26.76%, 750=28.98%, 1000=17.51%, 2000=8.88% 00:19:09.512 lat (msec) : >=2000=17.76% 00:19:09.512 cpu : usr=0.00%, sys=1.04%, ctx=2348, majf=0, minf=32769 00:19:09.512 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=2.0%, 32=3.9%, >=64=92.2% 00:19:09.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.512 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.512 issued rwts: total=811,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.512 job1: (groupid=0, jobs=1): err= 0: pid=2582906: Thu Jul 25 10:07:53 2024 00:19:09.512 read: IOPS=70, BW=70.8MiB/s (74.2MB/s)(768MiB/10850msec) 00:19:09.512 slat (usec): min=51, max=2131.3k, avg=14041.39, stdev=129749.51 00:19:09.512 clat (msec): min=62, max=6928, avg=1727.10, stdev=2087.04 00:19:09.512 lat (msec): min=592, max=6930, avg=1741.14, stdev=2092.51 00:19:09.512 clat percentiles (msec): 00:19:09.512 | 1.00th=[ 592], 5.00th=[ 609], 10.00th=[ 617], 20.00th=[ 642], 00:19:09.512 | 30.00th=[ 659], 40.00th=[ 760], 50.00th=[ 818], 60.00th=[ 877], 00:19:09.512 | 70.00th=[ 936], 80.00th=[ 1020], 90.00th=[ 6544], 95.00th=[ 6745], 00:19:09.512 | 99.00th=[ 6879], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:19:09.512 | 99.99th=[ 6946] 00:19:09.512 bw ( KiB/s): min=18432, max=208896, per=5.20%, avg=131036.80, stdev=72479.99, samples=10 00:19:09.512 iops : min= 18, max= 204, avg=127.90, stdev=70.74, samples=10 00:19:09.512 lat (msec) : 100=0.13%, 750=39.58%, 1000=39.06%, 2000=1.69%, >=2000=19.53% 00:19:09.512 cpu : usr=0.02%, sys=1.27%, ctx=707, majf=0, minf=32769 00:19:09.512 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.0%, 16=2.1%, 32=4.2%, >=64=91.8% 00:19:09.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.512 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:09.512 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.512 job1: (groupid=0, jobs=1): err= 0: pid=2582907: Thu Jul 25 10:07:53 2024 00:19:09.512 read: IOPS=1, BW=1506KiB/s (1542kB/s)(19.0MiB/12922msec) 00:19:09.512 slat (usec): min=737, max=4236.5k, avg=568474.44, stdev=1177792.33 00:19:09.512 clat (msec): min=2120, max=12912, avg=11124.86, stdev=3223.67 00:19:09.512 lat (msec): min=4280, max=12921, avg=11693.34, stdev=2392.91 00:19:09.512 clat percentiles (msec): 00:19:09.512 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4279], 20.00th=[10671], 00:19:09.512 | 30.00th=[10671], 40.00th=[12684], 50.00th=[12818], 60.00th=[12818], 00:19:09.512 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:09.512 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.512 | 99.99th=[12953] 00:19:09.512 lat (msec) : >=2000=100.00% 00:19:09.512 cpu : usr=0.00%, sys=0.12%, ctx=52, majf=0, minf=4865 00:19:09.512 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:19:09.512 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.512 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:09.512 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.512 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.512 
job1: (groupid=0, jobs=1): err= 0: pid=2582908: Thu Jul 25 10:07:53 2024 00:19:09.512 read: IOPS=17, BW=17.7MiB/s (18.6MB/s)(230MiB/12998msec) 00:19:09.512 slat (usec): min=55, max=2102.4k, avg=46994.02, stdev=275774.36 00:19:09.512 clat (msec): min=881, max=11492, avg=6699.51, stdev=4541.18 00:19:09.512 lat (msec): min=894, max=11495, avg=6746.50, stdev=4535.18 00:19:09.512 clat percentiles (msec): 00:19:09.512 | 1.00th=[ 894], 5.00th=[ 927], 10.00th=[ 953], 20.00th=[ 1737], 00:19:09.512 | 30.00th=[ 1838], 40.00th=[ 3104], 50.00th=[ 8490], 60.00th=[10805], 00:19:09.512 | 70.00th=[10939], 80.00th=[11208], 90.00th=[11342], 95.00th=[11342], 00:19:09.513 | 99.00th=[11476], 99.50th=[11476], 99.90th=[11476], 99.95th=[11476], 00:19:09.513 | 99.99th=[11476] 00:19:09.513 bw ( KiB/s): min= 1932, max=92160, per=1.39%, avg=35135.83, stdev=42987.00, samples=6 00:19:09.513 iops : min= 1, max= 90, avg=34.00, stdev=42.25, samples=6 00:19:09.513 lat (msec) : 1000=15.65%, 2000=22.61%, >=2000=61.74% 00:19:09.513 cpu : usr=0.00%, sys=0.65%, ctx=258, majf=0, minf=32769 00:19:09.513 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.5%, 16=7.0%, 32=13.9%, >=64=72.6% 00:19:09.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.513 complete : 0=0.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.0% 00:19:09.513 issued rwts: total=230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.513 job1: (groupid=0, jobs=1): err= 0: pid=2582909: Thu Jul 25 10:07:53 2024 00:19:09.513 read: IOPS=44, BW=44.6MiB/s (46.8MB/s)(625MiB/14003msec) 00:19:09.513 slat (usec): min=49, max=2095.5k, avg=18951.79, stdev=165182.94 00:19:09.513 clat (msec): min=345, max=13990, avg=2787.40, stdev=4081.91 00:19:09.513 lat (msec): min=346, max=13992, avg=2806.35, stdev=4094.86 00:19:09.513 clat percentiles (msec): 00:19:09.513 | 1.00th=[ 347], 5.00th=[ 351], 10.00th=[ 351], 20.00th=[ 376], 00:19:09.513 | 30.00th=[ 443], 40.00th=[ 477], 50.00th=[ 802], 60.00th=[ 927], 00:19:09.513 | 70.00th=[ 1083], 80.00th=[ 6409], 90.00th=[10939], 95.00th=[11073], 00:19:09.513 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:19:09.513 | 99.99th=[14026] 00:19:09.513 bw ( KiB/s): min= 2052, max=309248, per=4.04%, avg=101942.70, stdev=101576.74, samples=10 00:19:09.513 iops : min= 2, max= 302, avg=99.40, stdev=99.25, samples=10 00:19:09.513 lat (msec) : 500=40.16%, 750=4.32%, 1000=19.20%, 2000=11.84%, >=2000=24.48% 00:19:09.513 cpu : usr=0.02%, sys=1.03%, ctx=570, majf=0, minf=32769 00:19:09.513 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.1%, >=64=89.9% 00:19:09.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.513 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:09.513 issued rwts: total=625,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.513 job1: (groupid=0, jobs=1): err= 0: pid=2582910: Thu Jul 25 10:07:53 2024 00:19:09.513 read: IOPS=3, BW=4072KiB/s (4170kB/s)(56.0MiB/14081msec) 00:19:09.513 slat (usec): min=678, max=2136.1k, avg=213006.29, stdev=608554.14 00:19:09.513 clat (msec): min=2151, max=14079, avg=10803.21, stdev=3909.67 00:19:09.513 lat (msec): min=4209, max=14080, avg=11016.22, stdev=3751.50 00:19:09.513 clat percentiles (msec): 00:19:09.513 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:19:09.513 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12818], 
60.00th=[14026], 00:19:09.513 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:19:09.513 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:19:09.513 | 99.99th=[14026] 00:19:09.513 lat (msec) : >=2000=100.00% 00:19:09.513 cpu : usr=0.00%, sys=0.28%, ctx=87, majf=0, minf=14337 00:19:09.513 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:19:09.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.513 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.513 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.513 job1: (groupid=0, jobs=1): err= 0: pid=2582911: Thu Jul 25 10:07:53 2024 00:19:09.513 read: IOPS=72, BW=72.2MiB/s (75.7MB/s)(1017MiB/14089msec) 00:19:09.513 slat (usec): min=50, max=2075.3k, avg=11722.91, stdev=112095.39 00:19:09.513 clat (msec): min=398, max=9072, avg=1693.13, stdev=2617.85 00:19:09.513 lat (msec): min=402, max=9087, avg=1704.86, stdev=2626.33 00:19:09.513 clat percentiles (msec): 00:19:09.513 | 1.00th=[ 409], 5.00th=[ 418], 10.00th=[ 443], 20.00th=[ 542], 00:19:09.513 | 30.00th=[ 550], 40.00th=[ 558], 50.00th=[ 609], 60.00th=[ 760], 00:19:09.513 | 70.00th=[ 860], 80.00th=[ 1099], 90.00th=[ 8658], 95.00th=[ 8792], 00:19:09.513 | 99.00th=[ 9060], 99.50th=[ 9060], 99.90th=[ 9060], 99.95th=[ 9060], 00:19:09.513 | 99.99th=[ 9060] 00:19:09.513 bw ( KiB/s): min= 2052, max=299008, per=5.56%, avg=140244.85, stdev=108693.38, samples=13 00:19:09.513 iops : min= 2, max= 292, avg=136.85, stdev=106.20, samples=13 00:19:09.513 lat (msec) : 500=12.39%, 750=47.20%, 1000=19.37%, 2000=7.08%, >=2000=13.96% 00:19:09.513 cpu : usr=0.04%, sys=1.44%, ctx=1009, majf=0, minf=32769 00:19:09.513 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:19:09.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.513 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.513 issued rwts: total=1017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.513 job1: (groupid=0, jobs=1): err= 0: pid=2582912: Thu Jul 25 10:07:53 2024 00:19:09.513 read: IOPS=3, BW=3683KiB/s (3772kB/s)(39.0MiB/10842msec) 00:19:09.513 slat (usec): min=590, max=3159.8k, avg=276383.79, stdev=742242.03 00:19:09.513 clat (msec): min=61, max=10831, avg=7273.95, stdev=3280.93 00:19:09.513 lat (msec): min=2145, max=10840, avg=7550.33, stdev=3106.79 00:19:09.513 clat percentiles (msec): 00:19:09.513 | 1.00th=[ 63], 5.00th=[ 2140], 10.00th=[ 2165], 20.00th=[ 4245], 00:19:09.513 | 30.00th=[ 4279], 40.00th=[ 6409], 50.00th=[ 6409], 60.00th=[ 9597], 00:19:09.513 | 70.00th=[10671], 80.00th=[10671], 90.00th=[10805], 95.00th=[10805], 00:19:09.513 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:19:09.513 | 99.99th=[10805] 00:19:09.513 lat (msec) : 100=2.56%, >=2000=97.44% 00:19:09.513 cpu : usr=0.00%, sys=0.26%, ctx=73, majf=0, minf=9985 00:19:09.513 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:19:09.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.513 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.513 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.513 job1: 
(groupid=0, jobs=1): err= 0: pid=2582913: Thu Jul 25 10:07:53 2024 00:19:09.513 read: IOPS=99, BW=99.9MiB/s (105MB/s)(1077MiB/10786msec) 00:19:09.513 slat (usec): min=71, max=2055.7k, avg=9948.23, stdev=88030.28 00:19:09.513 clat (msec): min=58, max=4764, avg=1165.09, stdev=1267.23 00:19:09.513 lat (msec): min=399, max=4785, avg=1175.04, stdev=1270.93 00:19:09.513 clat percentiles (msec): 00:19:09.513 | 1.00th=[ 401], 5.00th=[ 401], 10.00th=[ 405], 20.00th=[ 418], 00:19:09.513 | 30.00th=[ 498], 40.00th=[ 523], 50.00th=[ 575], 60.00th=[ 634], 00:19:09.513 | 70.00th=[ 760], 80.00th=[ 1871], 90.00th=[ 4329], 95.00th=[ 4530], 00:19:09.513 | 99.00th=[ 4732], 99.50th=[ 4732], 99.90th=[ 4732], 99.95th=[ 4732], 00:19:09.513 | 99.99th=[ 4732] 00:19:09.513 bw ( KiB/s): min= 6144, max=319488, per=6.42%, avg=161973.25, stdev=116208.86, samples=12 00:19:09.513 iops : min= 6, max= 312, avg=158.17, stdev=113.49, samples=12 00:19:09.513 lat (msec) : 100=0.09%, 500=30.55%, 750=39.18%, 1000=2.41%, 2000=14.67% 00:19:09.513 lat (msec) : >=2000=13.09% 00:19:09.513 cpu : usr=0.14%, sys=1.91%, ctx=1188, majf=0, minf=32769 00:19:09.513 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=3.0%, >=64=94.2% 00:19:09.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.513 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.513 issued rwts: total=1077,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.513 job2: (groupid=0, jobs=1): err= 0: pid=2582918: Thu Jul 25 10:07:53 2024 00:19:09.513 read: IOPS=117, BW=118MiB/s (123MB/s)(1646MiB/13987msec) 00:19:09.513 slat (usec): min=39, max=2107.3k, avg=7187.28, stdev=88502.33 00:19:09.513 clat (msec): min=241, max=6671, avg=908.78, stdev=1632.54 00:19:09.513 lat (msec): min=242, max=6672, avg=915.97, stdev=1638.52 00:19:09.513 clat percentiles (msec): 00:19:09.513 | 1.00th=[ 243], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 257], 00:19:09.513 | 30.00th=[ 266], 40.00th=[ 268], 50.00th=[ 279], 60.00th=[ 535], 00:19:09.513 | 70.00th=[ 558], 80.00th=[ 634], 90.00th=[ 911], 95.00th=[ 6477], 00:19:09.513 | 99.00th=[ 6611], 99.50th=[ 6678], 99.90th=[ 6678], 99.95th=[ 6678], 00:19:09.513 | 99.99th=[ 6678] 00:19:09.513 bw ( KiB/s): min= 2052, max=524288, per=9.49%, avg=239272.23, stdev=197645.63, samples=13 00:19:09.513 iops : min= 2, max= 512, avg=233.62, stdev=193.03, samples=13 00:19:09.513 lat (msec) : 250=11.30%, 500=44.84%, 750=27.22%, 1000=7.35%, >=2000=9.30% 00:19:09.513 cpu : usr=0.06%, sys=1.61%, ctx=1490, majf=0, minf=32769 00:19:09.513 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=1.0%, 32=1.9%, >=64=96.2% 00:19:09.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.513 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.513 issued rwts: total=1646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.513 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.513 job2: (groupid=0, jobs=1): err= 0: pid=2582919: Thu Jul 25 10:07:53 2024 00:19:09.513 read: IOPS=122, BW=123MiB/s (129MB/s)(1231MiB/10028msec) 00:19:09.513 slat (usec): min=40, max=87938, avg=8118.75, stdev=12513.85 00:19:09.513 clat (msec): min=27, max=1959, avg=953.78, stdev=555.87 00:19:09.513 lat (msec): min=29, max=1965, avg=961.90, stdev=559.79 00:19:09.513 clat percentiles (msec): 00:19:09.513 | 1.00th=[ 71], 5.00th=[ 355], 10.00th=[ 409], 20.00th=[ 439], 00:19:09.513 | 30.00th=[ 485], 40.00th=[ 567], 50.00th=[ 
818], 60.00th=[ 1020], 00:19:09.513 | 70.00th=[ 1301], 80.00th=[ 1703], 90.00th=[ 1821], 95.00th=[ 1871], 00:19:09.513 | 99.00th=[ 1955], 99.50th=[ 1955], 99.90th=[ 1955], 99.95th=[ 1955], 00:19:09.513 | 99.99th=[ 1955] 00:19:09.513 bw ( KiB/s): min=55296, max=315392, per=5.27%, avg=132886.06, stdev=81684.02, samples=17 00:19:09.513 iops : min= 54, max= 308, avg=129.65, stdev=79.70, samples=17 00:19:09.513 lat (msec) : 50=0.57%, 100=0.89%, 250=2.27%, 500=28.43%, 750=14.62% 00:19:09.513 lat (msec) : 1000=11.29%, 2000=41.92% 00:19:09.513 cpu : usr=0.11%, sys=1.76%, ctx=2193, majf=0, minf=32769 00:19:09.513 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.6%, >=64=94.9% 00:19:09.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.513 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.513 issued rwts: total=1231,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.514 job2: (groupid=0, jobs=1): err= 0: pid=2582920: Thu Jul 25 10:07:53 2024 00:19:09.514 read: IOPS=1, BW=1741KiB/s (1783kB/s)(22.0MiB/12937msec) 00:19:09.514 slat (usec): min=775, max=2091.6k, avg=491380.22, stdev=881627.38 00:19:09.514 clat (msec): min=2125, max=12935, avg=9680.46, stdev=3643.75 00:19:09.514 lat (msec): min=4217, max=12936, avg=10171.84, stdev=3288.01 00:19:09.514 clat percentiles (msec): 00:19:09.514 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:09.514 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12684], 00:19:09.514 | 70.00th=[12684], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:19:09.514 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.514 | 99.99th=[12953] 00:19:09.514 lat (msec) : >=2000=100.00% 00:19:09.514 cpu : usr=0.00%, sys=0.13%, ctx=61, majf=0, minf=5633 00:19:09.514 IO depths : 1=4.5%, 2=9.1%, 4=18.2%, 8=36.4%, 16=31.8%, 32=0.0%, >=64=0.0% 00:19:09.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.514 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:09.514 issued rwts: total=22,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.514 job2: (groupid=0, jobs=1): err= 0: pid=2582921: Thu Jul 25 10:07:53 2024 00:19:09.514 read: IOPS=44, BW=44.1MiB/s (46.3MB/s)(620MiB/14052msec) 00:19:09.514 slat (usec): min=47, max=2114.0k, avg=19194.22, stdev=159795.36 00:19:09.514 clat (msec): min=505, max=10646, avg=2799.26, stdev=2381.09 00:19:09.514 lat (msec): min=507, max=12716, avg=2818.45, stdev=2397.24 00:19:09.514 clat percentiles (msec): 00:19:09.514 | 1.00th=[ 510], 5.00th=[ 518], 10.00th=[ 518], 20.00th=[ 558], 00:19:09.514 | 30.00th=[ 793], 40.00th=[ 927], 50.00th=[ 1469], 60.00th=[ 4665], 00:19:09.514 | 70.00th=[ 4866], 80.00th=[ 5067], 90.00th=[ 6074], 95.00th=[ 6275], 00:19:09.514 | 99.00th=[ 7148], 99.50th=[ 7215], 99.90th=[10671], 99.95th=[10671], 00:19:09.514 | 99.99th=[10671] 00:19:09.514 bw ( KiB/s): min= 2052, max=253952, per=4.00%, avg=100867.80, stdev=92372.87, samples=10 00:19:09.514 iops : min= 2, max= 248, avg=98.30, stdev=90.08, samples=10 00:19:09.514 lat (msec) : 750=29.84%, 1000=11.77%, 2000=16.77%, >=2000=41.61% 00:19:09.514 cpu : usr=0.01%, sys=0.98%, ctx=757, majf=0, minf=32769 00:19:09.514 IO depths : 1=0.2%, 2=0.3%, 4=0.6%, 8=1.3%, 16=2.6%, 32=5.2%, >=64=89.8% 00:19:09.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:19:09.514 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:09.514 issued rwts: total=620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.514 job2: (groupid=0, jobs=1): err= 0: pid=2582922: Thu Jul 25 10:07:53 2024 00:19:09.514 read: IOPS=3, BW=3383KiB/s (3465kB/s)(43.0MiB/13014msec) 00:19:09.514 slat (usec): min=583, max=2141.3k, avg=253155.65, stdev=679642.51 00:19:09.514 clat (msec): min=2127, max=13012, avg=11904.95, stdev=2587.43 00:19:09.514 lat (msec): min=4268, max=13013, avg=12158.11, stdev=2093.38 00:19:09.514 clat percentiles (msec): 00:19:09.514 | 1.00th=[ 2123], 5.00th=[ 6342], 10.00th=[ 8490], 20.00th=[12818], 00:19:09.514 | 30.00th=[12953], 40.00th=[12953], 50.00th=[12953], 60.00th=[12953], 00:19:09.514 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:09.514 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.514 | 99.99th=[12953] 00:19:09.514 lat (msec) : >=2000=100.00% 00:19:09.514 cpu : usr=0.00%, sys=0.20%, ctx=93, majf=0, minf=11009 00:19:09.514 IO depths : 1=2.3%, 2=4.7%, 4=9.3%, 8=18.6%, 16=37.2%, 32=27.9%, >=64=0.0% 00:19:09.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.514 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.514 issued rwts: total=43,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.514 job2: (groupid=0, jobs=1): err= 0: pid=2582923: Thu Jul 25 10:07:53 2024 00:19:09.514 read: IOPS=3, BW=3225KiB/s (3302kB/s)(41.0MiB/13018msec) 00:19:09.514 slat (usec): min=653, max=2096.3k, avg=265544.58, stdev=689738.24 00:19:09.514 clat (msec): min=2129, max=13016, avg=11213.53, stdev=3277.87 00:19:09.514 lat (msec): min=4218, max=13017, avg=11479.07, stdev=2948.00 00:19:09.514 clat percentiles (msec): 00:19:09.514 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[ 8557], 00:19:09.514 | 30.00th=[12818], 40.00th=[12953], 50.00th=[12953], 60.00th=[12953], 00:19:09.514 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:09.514 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.514 | 99.99th=[12953] 00:19:09.514 lat (msec) : >=2000=100.00% 00:19:09.514 cpu : usr=0.00%, sys=0.18%, ctx=91, majf=0, minf=10497 00:19:09.514 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:19:09.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.514 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.514 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.514 job2: (groupid=0, jobs=1): err= 0: pid=2582924: Thu Jul 25 10:07:53 2024 00:19:09.514 read: IOPS=1, BW=1577KiB/s (1615kB/s)(20.0MiB/12988msec) 00:19:09.514 slat (usec): min=1348, max=2107.8k, avg=542888.63, stdev=922354.02 00:19:09.514 clat (msec): min=2130, max=12984, avg=10196.34, stdev=3646.18 00:19:09.514 lat (msec): min=4218, max=12987, avg=10739.23, stdev=3157.54 00:19:09.514 clat percentiles (msec): 00:19:09.514 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 6342], 00:19:09.514 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:19:09.514 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:09.514 | 99.00th=[12953], 99.50th=[12953], 
99.90th=[12953], 99.95th=[12953], 00:19:09.514 | 99.99th=[12953] 00:19:09.514 lat (msec) : >=2000=100.00% 00:19:09.514 cpu : usr=0.00%, sys=0.09%, ctx=65, majf=0, minf=5121 00:19:09.514 IO depths : 1=5.0%, 2=10.0%, 4=20.0%, 8=40.0%, 16=25.0%, 32=0.0%, >=64=0.0% 00:19:09.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.514 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:09.514 issued rwts: total=20,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.514 job2: (groupid=0, jobs=1): err= 0: pid=2582925: Thu Jul 25 10:07:53 2024 00:19:09.514 read: IOPS=142, BW=142MiB/s (149MB/s)(1427MiB/10026msec) 00:19:09.514 slat (usec): min=47, max=90228, avg=7004.10, stdev=11597.87 00:19:09.514 clat (msec): min=25, max=1455, avg=818.83, stdev=284.05 00:19:09.514 lat (msec): min=26, max=1464, avg=825.83, stdev=286.09 00:19:09.514 clat percentiles (msec): 00:19:09.514 | 1.00th=[ 51], 5.00th=[ 300], 10.00th=[ 472], 20.00th=[ 584], 00:19:09.514 | 30.00th=[ 651], 40.00th=[ 735], 50.00th=[ 785], 60.00th=[ 969], 00:19:09.514 | 70.00th=[ 1062], 80.00th=[ 1070], 90.00th=[ 1116], 95.00th=[ 1183], 00:19:09.514 | 99.00th=[ 1368], 99.50th=[ 1418], 99.90th=[ 1452], 99.95th=[ 1452], 00:19:09.514 | 99.99th=[ 1452] 00:19:09.514 bw ( KiB/s): min= 8208, max=270336, per=5.87%, avg=147926.33, stdev=57006.85, samples=18 00:19:09.514 iops : min= 8, max= 264, avg=144.44, stdev=55.68, samples=18 00:19:09.514 lat (msec) : 50=0.91%, 100=1.26%, 250=2.03%, 500=7.43%, 750=32.03% 00:19:09.514 lat (msec) : 1000=20.46%, 2000=35.88% 00:19:09.514 cpu : usr=0.03%, sys=1.76%, ctx=2274, majf=0, minf=32769 00:19:09.514 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:19:09.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.514 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.514 issued rwts: total=1427,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.514 job2: (groupid=0, jobs=1): err= 0: pid=2582926: Thu Jul 25 10:07:53 2024 00:19:09.514 read: IOPS=17, BW=17.9MiB/s (18.7MB/s)(213MiB/11922msec) 00:19:09.514 slat (usec): min=60, max=2130.2k, avg=47039.35, stdev=274491.53 00:19:09.514 clat (msec): min=583, max=10733, avg=5219.36, stdev=2562.27 00:19:09.514 lat (msec): min=583, max=11789, avg=5266.40, stdev=2589.65 00:19:09.514 clat percentiles (msec): 00:19:09.514 | 1.00th=[ 634], 5.00th=[ 1955], 10.00th=[ 2022], 20.00th=[ 2089], 00:19:09.514 | 30.00th=[ 2165], 40.00th=[ 4245], 50.00th=[ 6275], 60.00th=[ 6275], 00:19:09.514 | 70.00th=[ 6342], 80.00th=[ 7953], 90.00th=[ 8154], 95.00th=[ 8154], 00:19:09.514 | 99.00th=[10671], 99.50th=[10671], 99.90th=[10671], 99.95th=[10671], 00:19:09.514 | 99.99th=[10671] 00:19:09.514 bw ( KiB/s): min= 2039, max=126976, per=1.16%, avg=29352.33, stdev=49045.35, samples=6 00:19:09.514 iops : min= 1, max= 124, avg=28.33, stdev=48.12, samples=6 00:19:09.514 lat (msec) : 750=1.41%, 2000=7.51%, >=2000=91.08% 00:19:09.514 cpu : usr=0.00%, sys=0.80%, ctx=202, majf=0, minf=32769 00:19:09.514 IO depths : 1=0.5%, 2=0.9%, 4=1.9%, 8=3.8%, 16=7.5%, 32=15.0%, >=64=70.4% 00:19:09.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.514 complete : 0=0.0%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.1% 00:19:09.514 issued rwts: total=213,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.514 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:19:09.514 job2: (groupid=0, jobs=1): err= 0: pid=2582927: Thu Jul 25 10:07:53 2024 00:19:09.514 read: IOPS=24, BW=25.0MiB/s (26.2MB/s)(351MiB/14068msec) 00:19:09.514 slat (usec): min=55, max=2120.1k, avg=33951.89, stdev=228289.33 00:19:09.514 clat (msec): min=610, max=13831, avg=4683.25, stdev=4799.04 00:19:09.514 lat (msec): min=618, max=13836, avg=4717.21, stdev=4812.44 00:19:09.514 clat percentiles (msec): 00:19:09.514 | 1.00th=[ 642], 5.00th=[ 651], 10.00th=[ 684], 20.00th=[ 760], 00:19:09.514 | 30.00th=[ 827], 40.00th=[ 911], 50.00th=[ 1955], 60.00th=[ 2106], 00:19:09.514 | 70.00th=[10939], 80.00th=[11073], 90.00th=[11208], 95.00th=[11342], 00:19:09.514 | 99.00th=[11476], 99.50th=[12818], 99.90th=[13892], 99.95th=[13892], 00:19:09.514 | 99.99th=[13892] 00:19:09.514 bw ( KiB/s): min= 2052, max=188039, per=2.27%, avg=57293.75, stdev=76835.90, samples=8 00:19:09.514 iops : min= 2, max= 183, avg=55.75, stdev=74.96, samples=8 00:19:09.514 lat (msec) : 750=19.09%, 1000=30.48%, 2000=2.85%, >=2000=47.58% 00:19:09.514 cpu : usr=0.01%, sys=0.81%, ctx=338, majf=0, minf=32125 00:19:09.515 IO depths : 1=0.3%, 2=0.6%, 4=1.1%, 8=2.3%, 16=4.6%, 32=9.1%, >=64=82.1% 00:19:09.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.515 complete : 0=0.0%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.4% 00:19:09.515 issued rwts: total=351,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.515 job2: (groupid=0, jobs=1): err= 0: pid=2582928: Thu Jul 25 10:07:53 2024 00:19:09.515 read: IOPS=1, BW=1317KiB/s (1348kB/s)(18.0MiB/13999msec) 00:19:09.515 slat (msec): min=6, max=4281, avg=658.41, stdev=1402.71 00:19:09.515 clat (msec): min=2146, max=13981, avg=10738.83, stdev=4231.01 00:19:09.515 lat (msec): min=4191, max=13997, avg=11397.24, stdev=3704.65 00:19:09.515 clat percentiles (msec): 00:19:09.515 | 1.00th=[ 2140], 5.00th=[ 2140], 10.00th=[ 4178], 20.00th=[ 4245], 00:19:09.515 | 30.00th=[ 8490], 40.00th=[12818], 50.00th=[12818], 60.00th=[12818], 00:19:09.515 | 70.00th=[13892], 80.00th=[13892], 90.00th=[13892], 95.00th=[14026], 00:19:09.515 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:19:09.515 | 99.99th=[14026] 00:19:09.515 lat (msec) : >=2000=100.00% 00:19:09.515 cpu : usr=0.00%, sys=0.10%, ctx=64, majf=0, minf=4609 00:19:09.515 IO depths : 1=5.6%, 2=11.1%, 4=22.2%, 8=44.4%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:09.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.515 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:09.515 issued rwts: total=18,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.515 job2: (groupid=0, jobs=1): err= 0: pid=2582929: Thu Jul 25 10:07:53 2024 00:19:09.515 read: IOPS=4, BW=4519KiB/s (4627kB/s)(62.0MiB/14049msec) 00:19:09.515 slat (usec): min=722, max=2127.5k, avg=191893.29, stdev=579677.92 00:19:09.515 clat (msec): min=2150, max=14047, avg=10639.72, stdev=3836.07 00:19:09.515 lat (msec): min=4196, max=14048, avg=10831.61, stdev=3699.60 00:19:09.515 clat percentiles (msec): 00:19:09.515 | 1.00th=[ 2165], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:19:09.515 | 30.00th=[ 8490], 40.00th=[10671], 50.00th=[12818], 60.00th=[12818], 00:19:09.515 | 70.00th=[13892], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:19:09.515 | 99.00th=[14026], 99.50th=[14026], 
99.90th=[14026], 99.95th=[14026], 00:19:09.515 | 99.99th=[14026] 00:19:09.515 lat (msec) : >=2000=100.00% 00:19:09.515 cpu : usr=0.01%, sys=0.36%, ctx=65, majf=0, minf=15873 00:19:09.515 IO depths : 1=1.6%, 2=3.2%, 4=6.5%, 8=12.9%, 16=25.8%, 32=50.0%, >=64=0.0% 00:19:09.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.515 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.515 issued rwts: total=62,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.515 job2: (groupid=0, jobs=1): err= 0: pid=2582930: Thu Jul 25 10:07:53 2024 00:19:09.515 read: IOPS=3, BW=3556KiB/s (3641kB/s)(45.0MiB/12959msec) 00:19:09.515 slat (usec): min=722, max=2087.1k, avg=240703.43, stdev=649879.68 00:19:09.515 clat (msec): min=2126, max=12956, avg=10756.55, stdev=3233.45 00:19:09.515 lat (msec): min=4211, max=12958, avg=10997.25, stdev=2968.75 00:19:09.515 clat percentiles (msec): 00:19:09.515 | 1.00th=[ 2123], 5.00th=[ 4245], 10.00th=[ 4329], 20.00th=[ 6409], 00:19:09.515 | 30.00th=[10671], 40.00th=[12684], 50.00th=[12818], 60.00th=[12818], 00:19:09.515 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:09.515 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.515 | 99.99th=[12953] 00:19:09.515 lat (msec) : >=2000=100.00% 00:19:09.515 cpu : usr=0.00%, sys=0.26%, ctx=75, majf=0, minf=11521 00:19:09.515 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:19:09.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.515 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.515 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.515 job3: (groupid=0, jobs=1): err= 0: pid=2582933: Thu Jul 25 10:07:53 2024 00:19:09.515 read: IOPS=2, BW=2701KiB/s (2765kB/s)(34.0MiB/12892msec) 00:19:09.515 slat (usec): min=630, max=3087.1k, avg=316927.77, stdev=784088.91 00:19:09.515 clat (msec): min=2115, max=12821, avg=10358.06, stdev=3558.06 00:19:09.515 lat (msec): min=4201, max=12891, avg=10674.98, stdev=3269.89 00:19:09.515 clat percentiles (msec): 00:19:09.515 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 6342], 00:19:09.515 | 30.00th=[ 8557], 40.00th=[12684], 50.00th=[12684], 60.00th=[12818], 00:19:09.515 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 00:19:09.515 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:09.515 | 99.99th=[12818] 00:19:09.515 lat (msec) : >=2000=100.00% 00:19:09.515 cpu : usr=0.00%, sys=0.18%, ctx=59, majf=0, minf=8705 00:19:09.515 IO depths : 1=2.9%, 2=5.9%, 4=11.8%, 8=23.5%, 16=47.1%, 32=8.8%, >=64=0.0% 00:19:09.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.515 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.515 issued rwts: total=34,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.515 job3: (groupid=0, jobs=1): err= 0: pid=2582934: Thu Jul 25 10:07:53 2024 00:19:09.515 read: IOPS=1, BW=1680KiB/s (1720kB/s)(23.0MiB/14021msec) 00:19:09.515 slat (usec): min=795, max=2133.5k, avg=516243.50, stdev=878366.25 00:19:09.515 clat (msec): min=2146, max=14018, avg=9764.71, stdev=4249.31 00:19:09.515 lat (msec): min=4198, max=14020, avg=10280.95, stdev=3995.43 
00:19:09.515 clat percentiles (msec): 00:19:09.515 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4212], 20.00th=[ 4279], 00:19:09.515 | 30.00th=[ 6409], 40.00th=[ 8490], 50.00th=[10671], 60.00th=[12818], 00:19:09.515 | 70.00th=[13892], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:19:09.515 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:19:09.515 | 99.99th=[14026] 00:19:09.515 lat (msec) : >=2000=100.00% 00:19:09.515 cpu : usr=0.00%, sys=0.13%, ctx=63, majf=0, minf=5889 00:19:09.515 IO depths : 1=4.3%, 2=8.7%, 4=17.4%, 8=34.8%, 16=34.8%, 32=0.0%, >=64=0.0% 00:19:09.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.515 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:09.515 issued rwts: total=23,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.515 job3: (groupid=0, jobs=1): err= 0: pid=2582935: Thu Jul 25 10:07:53 2024 00:19:09.515 read: IOPS=23, BW=23.1MiB/s (24.3MB/s)(299MiB/12918msec) 00:19:09.515 slat (usec): min=565, max=2121.2k, avg=36096.84, stdev=207262.57 00:19:09.515 clat (msec): min=970, max=11782, avg=3537.10, stdev=2795.92 00:19:09.515 lat (msec): min=977, max=12755, avg=3573.20, stdev=2834.09 00:19:09.515 clat percentiles (msec): 00:19:09.515 | 1.00th=[ 978], 5.00th=[ 995], 10.00th=[ 995], 20.00th=[ 1011], 00:19:09.515 | 30.00th=[ 1028], 40.00th=[ 1070], 50.00th=[ 1385], 60.00th=[ 5738], 00:19:09.515 | 70.00th=[ 6409], 80.00th=[ 6678], 90.00th=[ 6946], 95.00th=[ 7080], 00:19:09.515 | 99.00th=[10671], 99.50th=[10671], 99.90th=[11745], 99.95th=[11745], 00:19:09.515 | 99.99th=[11745] 00:19:09.515 bw ( KiB/s): min= 1402, max=136942, per=1.99%, avg=50181.43, stdev=55648.03, samples=7 00:19:09.515 iops : min= 1, max= 133, avg=48.71, stdev=54.26, samples=7 00:19:09.515 lat (msec) : 1000=13.04%, 2000=41.81%, >=2000=45.15% 00:19:09.515 cpu : usr=0.02%, sys=0.84%, ctx=770, majf=0, minf=32769 00:19:09.515 IO depths : 1=0.3%, 2=0.7%, 4=1.3%, 8=2.7%, 16=5.4%, 32=10.7%, >=64=78.9% 00:19:09.515 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.515 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:19:09.515 issued rwts: total=299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.515 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.515 job3: (groupid=0, jobs=1): err= 0: pid=2582936: Thu Jul 25 10:07:53 2024 00:19:09.516 read: IOPS=4, BW=4130KiB/s (4229kB/s)(52.0MiB/12894msec) 00:19:09.516 slat (usec): min=717, max=2070.3k, avg=207259.04, stdev=609020.12 00:19:09.516 clat (msec): min=2115, max=12890, avg=9772.90, stdev=3278.44 00:19:09.516 lat (msec): min=4185, max=12893, avg=9980.16, stdev=3121.78 00:19:09.516 clat percentiles (msec): 00:19:09.516 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:19:09.516 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12684], 00:19:09.516 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12953], 00:19:09.516 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.516 | 99.99th=[12953] 00:19:09.516 lat (msec) : >=2000=100.00% 00:19:09.516 cpu : usr=0.00%, sys=0.32%, ctx=66, majf=0, minf=13313 00:19:09.516 IO depths : 1=1.9%, 2=3.8%, 4=7.7%, 8=15.4%, 16=30.8%, 32=40.4%, >=64=0.0% 00:19:09.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.516 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 
00:19:09.516 issued rwts: total=52,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.516 job3: (groupid=0, jobs=1): err= 0: pid=2582937: Thu Jul 25 10:07:53 2024 00:19:09.516 read: IOPS=1, BW=1030KiB/s (1054kB/s)(13.0MiB/12927msec) 00:19:09.516 slat (msec): min=5, max=3162, avg=831.31, stdev=1137.34 00:19:09.516 clat (msec): min=2119, max=12804, avg=8355.05, stdev=4021.73 00:19:09.516 lat (msec): min=4199, max=12926, avg=9186.36, stdev=3731.99 00:19:09.516 clat percentiles (msec): 00:19:09.516 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4212], 20.00th=[ 4245], 00:19:09.516 | 30.00th=[ 4245], 40.00th=[ 7416], 50.00th=[ 9597], 60.00th=[11745], 00:19:09.516 | 70.00th=[11745], 80.00th=[11745], 90.00th=[12818], 95.00th=[12818], 00:19:09.516 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:09.516 | 99.99th=[12818] 00:19:09.516 lat (msec) : >=2000=100.00% 00:19:09.516 cpu : usr=0.00%, sys=0.07%, ctx=53, majf=0, minf=3329 00:19:09.516 IO depths : 1=7.7%, 2=15.4%, 4=30.8%, 8=46.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:09.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.516 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=100.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.516 issued rwts: total=13,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.516 job3: (groupid=0, jobs=1): err= 0: pid=2582938: Thu Jul 25 10:07:53 2024 00:19:09.516 read: IOPS=220, BW=220MiB/s (231MB/s)(2225MiB/10093msec) 00:19:09.516 slat (usec): min=47, max=84542, avg=4510.53, stdev=9596.55 00:19:09.516 clat (msec): min=49, max=949, avg=553.27, stdev=134.77 00:19:09.516 lat (msec): min=98, max=950, avg=557.78, stdev=135.72 00:19:09.516 clat percentiles (msec): 00:19:09.516 | 1.00th=[ 169], 5.00th=[ 409], 10.00th=[ 418], 20.00th=[ 439], 00:19:09.516 | 30.00th=[ 464], 40.00th=[ 498], 50.00th=[ 535], 60.00th=[ 567], 00:19:09.516 | 70.00th=[ 667], 80.00th=[ 676], 90.00th=[ 709], 95.00th=[ 735], 00:19:09.516 | 99.00th=[ 894], 99.50th=[ 902], 99.90th=[ 953], 99.95th=[ 953], 00:19:09.516 | 99.99th=[ 953] 00:19:09.516 bw ( KiB/s): min=55296, max=311296, per=8.96%, avg=225908.58, stdev=61457.59, samples=19 00:19:09.516 iops : min= 54, max= 304, avg=220.58, stdev=60.04, samples=19 00:19:09.516 lat (msec) : 50=0.04%, 100=0.04%, 250=2.02%, 500=38.47%, 750=54.97% 00:19:09.516 lat (msec) : 1000=4.45% 00:19:09.516 cpu : usr=0.04%, sys=2.50%, ctx=2031, majf=0, minf=32769 00:19:09.516 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:19:09.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.516 issued rwts: total=2225,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.516 job3: (groupid=0, jobs=1): err= 0: pid=2582939: Thu Jul 25 10:07:53 2024 00:19:09.516 read: IOPS=67, BW=67.7MiB/s (71.0MB/s)(878MiB/12972msec) 00:19:09.516 slat (usec): min=59, max=2097.5k, avg=11383.74, stdev=86734.75 00:19:09.516 clat (msec): min=604, max=7035, avg=1827.24, stdev=1198.18 00:19:09.516 lat (msec): min=608, max=7451, avg=1838.63, stdev=1214.08 00:19:09.516 clat percentiles (msec): 00:19:09.516 | 1.00th=[ 609], 5.00th=[ 625], 10.00th=[ 642], 20.00th=[ 735], 00:19:09.516 | 30.00th=[ 802], 40.00th=[ 877], 50.00th=[ 961], 60.00th=[ 2735], 00:19:09.516 | 70.00th=[ 2903], 80.00th=[ 
3239], 90.00th=[ 3306], 95.00th=[ 3440], 00:19:09.516 | 99.00th=[ 3708], 99.50th=[ 4212], 99.90th=[ 7013], 99.95th=[ 7013], 00:19:09.516 | 99.99th=[ 7013] 00:19:09.516 bw ( KiB/s): min= 4096, max=204800, per=4.36%, avg=109860.57, stdev=70262.12, samples=14 00:19:09.516 iops : min= 4, max= 200, avg=107.29, stdev=68.62, samples=14 00:19:09.516 lat (msec) : 750=22.10%, 1000=29.04%, 2000=5.69%, >=2000=43.17% 00:19:09.516 cpu : usr=0.02%, sys=1.60%, ctx=1421, majf=0, minf=32769 00:19:09.516 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.8% 00:19:09.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.516 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.516 issued rwts: total=878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.516 job3: (groupid=0, jobs=1): err= 0: pid=2582940: Thu Jul 25 10:07:53 2024 00:19:09.516 read: IOPS=212, BW=212MiB/s (223MB/s)(2131MiB/10040msec) 00:19:09.516 slat (usec): min=43, max=40871, avg=4685.21, stdev=5306.10 00:19:09.516 clat (msec): min=38, max=1676, avg=565.10, stdev=245.02 00:19:09.516 lat (msec): min=42, max=1696, avg=569.79, stdev=246.79 00:19:09.516 clat percentiles (msec): 00:19:09.516 | 1.00th=[ 97], 5.00th=[ 326], 10.00th=[ 393], 20.00th=[ 422], 00:19:09.516 | 30.00th=[ 435], 40.00th=[ 477], 50.00th=[ 531], 60.00th=[ 550], 00:19:09.516 | 70.00th=[ 659], 80.00th=[ 676], 90.00th=[ 693], 95.00th=[ 1053], 00:19:09.516 | 99.00th=[ 1620], 99.50th=[ 1653], 99.90th=[ 1670], 99.95th=[ 1670], 00:19:09.516 | 99.99th=[ 1670] 00:19:09.516 bw ( KiB/s): min=30720, max=350208, per=9.04%, avg=227980.67, stdev=76589.16, samples=18 00:19:09.516 iops : min= 30, max= 342, avg=222.61, stdev=74.78, samples=18 00:19:09.516 lat (msec) : 50=0.09%, 100=0.94%, 250=2.67%, 500=41.30%, 750=47.25% 00:19:09.516 lat (msec) : 1000=2.16%, 2000=5.58% 00:19:09.516 cpu : usr=0.12%, sys=3.62%, ctx=2007, majf=0, minf=32769 00:19:09.516 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:19:09.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.516 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.516 issued rwts: total=2131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.516 job3: (groupid=0, jobs=1): err= 0: pid=2582941: Thu Jul 25 10:07:53 2024 00:19:09.516 read: IOPS=79, BW=79.1MiB/s (82.9MB/s)(1019MiB/12887msec) 00:19:09.516 slat (usec): min=49, max=2100.6k, avg=10558.17, stdev=92641.48 00:19:09.516 clat (msec): min=593, max=6419, avg=1545.96, stdev=1405.01 00:19:09.516 lat (msec): min=597, max=8500, avg=1556.52, stdev=1416.15 00:19:09.516 clat percentiles (msec): 00:19:09.516 | 1.00th=[ 609], 5.00th=[ 617], 10.00th=[ 625], 20.00th=[ 651], 00:19:09.516 | 30.00th=[ 743], 40.00th=[ 818], 50.00th=[ 860], 60.00th=[ 919], 00:19:09.516 | 70.00th=[ 961], 80.00th=[ 2869], 90.00th=[ 4463], 95.00th=[ 4866], 00:19:09.516 | 99.00th=[ 5067], 99.50th=[ 5134], 99.90th=[ 6409], 99.95th=[ 6409], 00:19:09.516 | 99.99th=[ 6409] 00:19:09.516 bw ( KiB/s): min= 2048, max=210944, per=4.83%, avg=121749.00, stdev=70773.89, samples=15 00:19:09.516 iops : min= 2, max= 206, avg=118.80, stdev=69.08, samples=15 00:19:09.516 lat (msec) : 750=30.23%, 1000=42.79%, 2000=1.96%, >=2000=25.02% 00:19:09.516 cpu : usr=0.06%, sys=1.53%, ctx=1219, majf=0, minf=32769 00:19:09.516 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 
8=0.8%, 16=1.6%, 32=3.1%, >=64=93.8% 00:19:09.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.516 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.516 issued rwts: total=1019,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.516 job3: (groupid=0, jobs=1): err= 0: pid=2582942: Thu Jul 25 10:07:53 2024 00:19:09.516 read: IOPS=45, BW=45.7MiB/s (47.9MB/s)(588MiB/12865msec) 00:19:09.516 slat (usec): min=40, max=2136.7k, avg=17000.93, stdev=147679.25 00:19:09.516 clat (msec): min=482, max=8488, avg=1407.63, stdev=1297.67 00:19:09.516 lat (msec): min=483, max=8500, avg=1424.63, stdev=1339.85 00:19:09.516 clat percentiles (msec): 00:19:09.516 | 1.00th=[ 518], 5.00th=[ 600], 10.00th=[ 701], 20.00th=[ 743], 00:19:09.516 | 30.00th=[ 760], 40.00th=[ 776], 50.00th=[ 793], 60.00th=[ 810], 00:19:09.516 | 70.00th=[ 835], 80.00th=[ 2903], 90.00th=[ 3104], 95.00th=[ 3306], 00:19:09.516 | 99.00th=[ 7282], 99.50th=[ 7349], 99.90th=[ 8490], 99.95th=[ 8490], 00:19:09.516 | 99.99th=[ 8490] 00:19:09.516 bw ( KiB/s): min=34816, max=257532, per=6.24%, avg=157268.67, stdev=71499.03, samples=6 00:19:09.516 iops : min= 34, max= 251, avg=153.50, stdev=69.68, samples=6 00:19:09.516 lat (msec) : 500=0.68%, 750=26.53%, 1000=48.81%, >=2000=23.98% 00:19:09.516 cpu : usr=0.03%, sys=0.89%, ctx=1054, majf=0, minf=32769 00:19:09.516 IO depths : 1=0.2%, 2=0.3%, 4=0.7%, 8=1.4%, 16=2.7%, 32=5.4%, >=64=89.3% 00:19:09.516 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.516 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:09.516 issued rwts: total=588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.516 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.516 job3: (groupid=0, jobs=1): err= 0: pid=2582943: Thu Jul 25 10:07:53 2024 00:19:09.516 read: IOPS=3, BW=3283KiB/s (3362kB/s)(45.0MiB/14036msec) 00:19:09.516 slat (usec): min=700, max=2135.3k, avg=264208.36, stdev=668225.10 00:19:09.516 clat (msec): min=2146, max=14034, avg=10723.12, stdev=3938.85 00:19:09.516 lat (msec): min=4187, max=14035, avg=10987.33, stdev=3744.41 00:19:09.516 clat percentiles (msec): 00:19:09.516 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:09.516 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[12818], 60.00th=[14026], 00:19:09.516 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:19:09.516 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:19:09.516 | 99.99th=[14026] 00:19:09.516 lat (msec) : >=2000=100.00% 00:19:09.516 cpu : usr=0.00%, sys=0.25%, ctx=79, majf=0, minf=11521 00:19:09.516 IO depths : 1=2.2%, 2=4.4%, 4=8.9%, 8=17.8%, 16=35.6%, 32=31.1%, >=64=0.0% 00:19:09.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.517 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.517 issued rwts: total=45,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.517 job3: (groupid=0, jobs=1): err= 0: pid=2582944: Thu Jul 25 10:07:53 2024 00:19:09.517 read: IOPS=4, BW=4758KiB/s (4872kB/s)(60.0MiB/12913msec) 00:19:09.517 slat (usec): min=567, max=2107.2k, avg=179897.11, stdev=583018.71 00:19:09.517 clat (msec): min=2118, max=12911, avg=10065.09, stdev=3025.44 00:19:09.517 lat (msec): min=4225, max=12912, avg=10244.99, stdev=2861.35 00:19:09.517 clat percentiles 
(msec): 00:19:09.517 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 6409], 00:19:09.517 | 30.00th=[ 8490], 40.00th=[ 8557], 50.00th=[10671], 60.00th=[12818], 00:19:09.517 | 70.00th=[12818], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:09.517 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.517 | 99.99th=[12953] 00:19:09.517 lat (msec) : >=2000=100.00% 00:19:09.517 cpu : usr=0.00%, sys=0.36%, ctx=59, majf=0, minf=15361 00:19:09.517 IO depths : 1=1.7%, 2=3.3%, 4=6.7%, 8=13.3%, 16=26.7%, 32=48.3%, >=64=0.0% 00:19:09.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.517 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.517 issued rwts: total=60,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.517 job3: (groupid=0, jobs=1): err= 0: pid=2582945: Thu Jul 25 10:07:53 2024 00:19:09.517 read: IOPS=127, BW=128MiB/s (134MB/s)(1387MiB/10872msec) 00:19:09.517 slat (usec): min=34, max=2098.6k, avg=7769.16, stdev=72611.13 00:19:09.517 clat (msec): min=87, max=4321, avg=924.71, stdev=754.51 00:19:09.517 lat (msec): min=206, max=6020, avg=932.48, stdev=762.32 00:19:09.517 clat percentiles (msec): 00:19:09.517 | 1.00th=[ 207], 5.00th=[ 209], 10.00th=[ 262], 20.00th=[ 313], 00:19:09.517 | 30.00th=[ 443], 40.00th=[ 659], 50.00th=[ 667], 60.00th=[ 684], 00:19:09.517 | 70.00th=[ 726], 80.00th=[ 1536], 90.00th=[ 2400], 95.00th=[ 2433], 00:19:09.517 | 99.00th=[ 2467], 99.50th=[ 2500], 99.90th=[ 4279], 99.95th=[ 4329], 00:19:09.517 | 99.99th=[ 4329] 00:19:09.517 bw ( KiB/s): min=114688, max=489472, per=8.52%, avg=214839.00, stdev=102549.96, samples=12 00:19:09.517 iops : min= 112, max= 478, avg=209.75, stdev=100.17, samples=12 00:19:09.517 lat (msec) : 100=0.07%, 250=9.30%, 500=22.93%, 750=38.86%, 1000=3.03% 00:19:09.517 lat (msec) : 2000=7.50%, >=2000=18.31% 00:19:09.517 cpu : usr=0.08%, sys=2.22%, ctx=1464, majf=0, minf=32769 00:19:09.517 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.5% 00:19:09.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.517 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.517 issued rwts: total=1387,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.517 job4: (groupid=0, jobs=1): err= 0: pid=2582951: Thu Jul 25 10:07:53 2024 00:19:09.517 read: IOPS=23, BW=23.4MiB/s (24.5MB/s)(235MiB/10062msec) 00:19:09.517 slat (usec): min=556, max=2081.6k, avg=42584.09, stdev=231753.16 00:19:09.517 clat (msec): min=52, max=8888, avg=3908.00, stdev=3590.09 00:19:09.517 lat (msec): min=93, max=8892, avg=3950.58, stdev=3600.41 00:19:09.517 clat percentiles (msec): 00:19:09.517 | 1.00th=[ 101], 5.00th=[ 184], 10.00th=[ 384], 20.00th=[ 592], 00:19:09.517 | 30.00th=[ 835], 40.00th=[ 1083], 50.00th=[ 1284], 60.00th=[ 7483], 00:19:09.517 | 70.00th=[ 7684], 80.00th=[ 8020], 90.00th=[ 8490], 95.00th=[ 8792], 00:19:09.517 | 99.00th=[ 8926], 99.50th=[ 8926], 99.90th=[ 8926], 99.95th=[ 8926], 00:19:09.517 | 99.99th=[ 8926] 00:19:09.517 bw ( KiB/s): min=36864, max=98501, per=2.90%, avg=73111.00, stdev=32220.89, samples=3 00:19:09.517 iops : min= 36, max= 96, avg=71.33, stdev=31.39, samples=3 00:19:09.517 lat (msec) : 100=0.85%, 250=6.38%, 500=8.09%, 750=11.06%, 1000=9.36% 00:19:09.517 lat (msec) : 2000=19.57%, >=2000=44.68% 00:19:09.517 cpu : usr=0.01%, sys=0.90%, ctx=690, 
majf=0, minf=32769 00:19:09.517 IO depths : 1=0.4%, 2=0.9%, 4=1.7%, 8=3.4%, 16=6.8%, 32=13.6%, >=64=73.2% 00:19:09.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.517 complete : 0=0.0%, 4=99.1%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.9% 00:19:09.517 issued rwts: total=235,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.517 job4: (groupid=0, jobs=1): err= 0: pid=2582952: Thu Jul 25 10:07:53 2024 00:19:09.517 read: IOPS=2, BW=2854KiB/s (2923kB/s)(39.0MiB/13992msec) 00:19:09.517 slat (usec): min=1718, max=2117.3k, avg=303758.30, stdev=705211.76 00:19:09.517 clat (msec): min=2144, max=13989, avg=9156.97, stdev=3698.64 00:19:09.517 lat (msec): min=4184, max=13991, avg=9460.73, stdev=3592.51 00:19:09.517 clat percentiles (msec): 00:19:09.517 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:09.517 | 30.00th=[ 6409], 40.00th=[ 8423], 50.00th=[ 8490], 60.00th=[10671], 00:19:09.517 | 70.00th=[12818], 80.00th=[12818], 90.00th=[13892], 95.00th=[14026], 00:19:09.517 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:19:09.517 | 99.99th=[14026] 00:19:09.517 lat (msec) : >=2000=100.00% 00:19:09.517 cpu : usr=0.00%, sys=0.21%, ctx=85, majf=0, minf=9985 00:19:09.517 IO depths : 1=2.6%, 2=5.1%, 4=10.3%, 8=20.5%, 16=41.0%, 32=20.5%, >=64=0.0% 00:19:09.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.517 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.517 issued rwts: total=39,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.517 job4: (groupid=0, jobs=1): err= 0: pid=2582953: Thu Jul 25 10:07:53 2024 00:19:09.517 read: IOPS=28, BW=28.4MiB/s (29.8MB/s)(285MiB/10034msec) 00:19:09.517 slat (usec): min=109, max=2098.2k, avg=35113.35, stdev=211385.03 00:19:09.517 clat (msec): min=24, max=8215, avg=4082.32, stdev=3475.42 00:19:09.517 lat (msec): min=36, max=8217, avg=4117.44, stdev=3479.55 00:19:09.517 clat percentiles (msec): 00:19:09.517 | 1.00th=[ 37], 5.00th=[ 134], 10.00th=[ 334], 20.00th=[ 642], 00:19:09.517 | 30.00th=[ 978], 40.00th=[ 1284], 50.00th=[ 1821], 60.00th=[ 7684], 00:19:09.517 | 70.00th=[ 7953], 80.00th=[ 8087], 90.00th=[ 8154], 95.00th=[ 8154], 00:19:09.517 | 99.00th=[ 8221], 99.50th=[ 8221], 99.90th=[ 8221], 99.95th=[ 8221], 00:19:09.517 | 99.99th=[ 8221] 00:19:09.517 bw ( KiB/s): min= 4096, max=102400, per=1.60%, avg=40448.00, stdev=39130.81, samples=8 00:19:09.517 iops : min= 4, max= 100, avg=39.50, stdev=38.21, samples=8 00:19:09.517 lat (msec) : 50=1.40%, 100=2.81%, 250=3.51%, 500=7.37%, 750=8.07% 00:19:09.517 lat (msec) : 1000=8.07%, 2000=21.05%, >=2000=47.72% 00:19:09.517 cpu : usr=0.03%, sys=0.91%, ctx=713, majf=0, minf=32769 00:19:09.517 IO depths : 1=0.4%, 2=0.7%, 4=1.4%, 8=2.8%, 16=5.6%, 32=11.2%, >=64=77.9% 00:19:09.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.517 complete : 0=0.0%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.6% 00:19:09.517 issued rwts: total=285,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.517 job4: (groupid=0, jobs=1): err= 0: pid=2582954: Thu Jul 25 10:07:53 2024 00:19:09.517 read: IOPS=2, BW=3001KiB/s (3073kB/s)(41.0MiB/13988msec) 00:19:09.517 slat (usec): min=743, max=2114.0k, avg=288873.48, stdev=686495.77 00:19:09.517 clat (msec): min=2143, max=13986, 
avg=10491.35, stdev=3976.83 00:19:09.517 lat (msec): min=4171, max=13987, avg=10780.22, stdev=3780.66 00:19:09.517 clat percentiles (msec): 00:19:09.517 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4245], 20.00th=[ 6342], 00:19:09.517 | 30.00th=[ 8423], 40.00th=[10671], 50.00th=[12818], 60.00th=[13758], 00:19:09.517 | 70.00th=[13892], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:19:09.517 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:19:09.517 | 99.99th=[14026] 00:19:09.517 lat (msec) : >=2000=100.00% 00:19:09.517 cpu : usr=0.00%, sys=0.22%, ctx=94, majf=0, minf=10497 00:19:09.517 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:19:09.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.517 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.517 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.517 job4: (groupid=0, jobs=1): err= 0: pid=2582955: Thu Jul 25 10:07:53 2024 00:19:09.517 read: IOPS=18, BW=18.8MiB/s (19.7MB/s)(189MiB/10058msec) 00:19:09.517 slat (usec): min=784, max=2090.5k, avg=52910.70, stdev=258273.63 00:19:09.517 clat (msec): min=56, max=9177, avg=3439.20, stdev=3609.70 00:19:09.517 lat (msec): min=61, max=9277, avg=3492.11, stdev=3631.50 00:19:09.517 clat percentiles (msec): 00:19:09.517 | 1.00th=[ 62], 5.00th=[ 138], 10.00th=[ 259], 20.00th=[ 558], 00:19:09.517 | 30.00th=[ 735], 40.00th=[ 927], 50.00th=[ 1116], 60.00th=[ 1318], 00:19:09.517 | 70.00th=[ 7819], 80.00th=[ 8154], 90.00th=[ 8658], 95.00th=[ 8926], 00:19:09.517 | 99.00th=[ 9060], 99.50th=[ 9194], 99.90th=[ 9194], 99.95th=[ 9194], 00:19:09.517 | 99.99th=[ 9194] 00:19:09.517 bw ( KiB/s): min=51097, max=75776, per=2.52%, avg=63436.50, stdev=17450.69, samples=2 00:19:09.517 iops : min= 49, max= 74, avg=61.50, stdev=17.68, samples=2 00:19:09.517 lat (msec) : 100=2.65%, 250=6.88%, 500=7.94%, 750=14.29%, 1000=12.17% 00:19:09.517 lat (msec) : 2000=19.05%, >=2000=37.04% 00:19:09.517 cpu : usr=0.02%, sys=0.85%, ctx=697, majf=0, minf=32769 00:19:09.517 IO depths : 1=0.5%, 2=1.1%, 4=2.1%, 8=4.2%, 16=8.5%, 32=16.9%, >=64=66.7% 00:19:09.517 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.517 complete : 0=0.0%, 4=98.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.6% 00:19:09.517 issued rwts: total=189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.517 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.517 job4: (groupid=0, jobs=1): err= 0: pid=2582956: Thu Jul 25 10:07:53 2024 00:19:09.517 read: IOPS=2, BW=2988KiB/s (3060kB/s)(41.0MiB/14050msec) 00:19:09.517 slat (usec): min=395, max=2170.7k, avg=290436.97, stdev=694736.17 00:19:09.517 clat (msec): min=2141, max=14047, avg=12191.67, stdev=3272.24 00:19:09.517 lat (msec): min=4213, max=14049, avg=12482.10, stdev=2860.47 00:19:09.517 clat percentiles (msec): 00:19:09.517 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[12684], 00:19:09.517 | 30.00th=[12684], 40.00th=[12818], 50.00th=[13892], 60.00th=[14026], 00:19:09.517 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:19:09.517 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:19:09.518 | 99.99th=[14026] 00:19:09.518 lat (msec) : >=2000=100.00% 00:19:09.518 cpu : usr=0.00%, sys=0.19%, ctx=108, majf=0, minf=10497 00:19:09.518 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:19:09.518 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.518 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.518 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.518 job4: (groupid=0, jobs=1): err= 0: pid=2582957: Thu Jul 25 10:07:53 2024 00:19:09.518 read: IOPS=1, BW=1513KiB/s (1549kB/s)(19.0MiB/12859msec) 00:19:09.518 slat (msec): min=7, max=2116, avg=565.32, stdev=926.03 00:19:09.518 clat (msec): min=2117, max=12798, avg=7051.01, stdev=2858.39 00:19:09.518 lat (msec): min=4181, max=12858, avg=7616.33, stdev=2890.35 00:19:09.518 clat percentiles (msec): 00:19:09.518 | 1.00th=[ 2123], 5.00th=[ 2123], 10.00th=[ 4178], 20.00th=[ 4212], 00:19:09.518 | 30.00th=[ 4279], 40.00th=[ 6342], 50.00th=[ 6409], 60.00th=[ 8490], 00:19:09.518 | 70.00th=[ 8557], 80.00th=[10671], 90.00th=[10671], 95.00th=[12818], 00:19:09.518 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:09.518 | 99.99th=[12818] 00:19:09.518 lat (msec) : >=2000=100.00% 00:19:09.518 cpu : usr=0.00%, sys=0.11%, ctx=49, majf=0, minf=4865 00:19:09.518 IO depths : 1=5.3%, 2=10.5%, 4=21.1%, 8=42.1%, 16=21.1%, 32=0.0%, >=64=0.0% 00:19:09.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.518 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:09.518 issued rwts: total=19,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.518 job4: (groupid=0, jobs=1): err= 0: pid=2582958: Thu Jul 25 10:07:53 2024 00:19:09.518 read: IOPS=4, BW=4427KiB/s (4533kB/s)(56.0MiB/12953msec) 00:19:09.518 slat (usec): min=474, max=2086.0k, avg=193137.14, stdev=582663.97 00:19:09.518 clat (msec): min=2136, max=12951, avg=10565.09, stdev=3050.38 00:19:09.518 lat (msec): min=4222, max=12952, avg=10758.23, stdev=2842.33 00:19:09.518 clat percentiles (msec): 00:19:09.518 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[ 8490], 00:19:09.518 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[12684], 60.00th=[12684], 00:19:09.518 | 70.00th=[12953], 80.00th=[12953], 90.00th=[12953], 95.00th=[12953], 00:19:09.518 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.518 | 99.99th=[12953] 00:19:09.518 lat (msec) : >=2000=100.00% 00:19:09.518 cpu : usr=0.00%, sys=0.34%, ctx=88, majf=0, minf=14337 00:19:09.518 IO depths : 1=1.8%, 2=3.6%, 4=7.1%, 8=14.3%, 16=28.6%, 32=44.6%, >=64=0.0% 00:19:09.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.518 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.518 issued rwts: total=56,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.518 job4: (groupid=0, jobs=1): err= 0: pid=2582959: Thu Jul 25 10:07:53 2024 00:19:09.518 read: IOPS=3, BW=3501KiB/s (3585kB/s)(44.0MiB/12869msec) 00:19:09.518 slat (usec): min=684, max=2070.8k, avg=244258.61, stdev=651042.45 00:19:09.518 clat (msec): min=2120, max=12860, avg=10529.53, stdev=3303.13 00:19:09.518 lat (msec): min=4179, max=12868, avg=10773.78, stdev=3054.92 00:19:09.518 clat percentiles (msec): 00:19:09.518 | 1.00th=[ 2123], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6409], 00:19:09.518 | 30.00th=[10671], 40.00th=[12684], 50.00th=[12684], 60.00th=[12818], 00:19:09.518 | 70.00th=[12818], 80.00th=[12818], 90.00th=[12818], 95.00th=[12818], 
00:19:09.518 | 99.00th=[12818], 99.50th=[12818], 99.90th=[12818], 99.95th=[12818], 00:19:09.518 | 99.99th=[12818] 00:19:09.518 lat (msec) : >=2000=100.00% 00:19:09.518 cpu : usr=0.00%, sys=0.25%, ctx=57, majf=0, minf=11265 00:19:09.518 IO depths : 1=2.3%, 2=4.5%, 4=9.1%, 8=18.2%, 16=36.4%, 32=29.5%, >=64=0.0% 00:19:09.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.518 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.518 issued rwts: total=44,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.518 job4: (groupid=0, jobs=1): err= 0: pid=2582960: Thu Jul 25 10:07:53 2024 00:19:09.518 read: IOPS=6, BW=6336KiB/s (6488kB/s)(80.0MiB/12929msec) 00:19:09.518 slat (usec): min=572, max=3129.3k, avg=134923.80, stdev=523726.02 00:19:09.518 clat (msec): min=2134, max=12910, avg=11446.93, stdev=2665.38 00:19:09.518 lat (msec): min=4191, max=12928, avg=11581.85, stdev=2452.74 00:19:09.518 clat percentiles (msec): 00:19:09.518 | 1.00th=[ 2140], 5.00th=[ 4245], 10.00th=[ 6342], 20.00th=[11745], 00:19:09.518 | 30.00th=[12281], 40.00th=[12416], 50.00th=[12416], 60.00th=[12550], 00:19:09.518 | 70.00th=[12550], 80.00th=[12684], 90.00th=[12684], 95.00th=[12818], 00:19:09.518 | 99.00th=[12953], 99.50th=[12953], 99.90th=[12953], 99.95th=[12953], 00:19:09.518 | 99.99th=[12953] 00:19:09.518 lat (msec) : >=2000=100.00% 00:19:09.518 cpu : usr=0.00%, sys=0.46%, ctx=115, majf=0, minf=20481 00:19:09.518 IO depths : 1=1.2%, 2=2.5%, 4=5.0%, 8=10.0%, 16=20.0%, 32=40.0%, >=64=21.3% 00:19:09.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.518 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:09.518 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.518 job4: (groupid=0, jobs=1): err= 0: pid=2582961: Thu Jul 25 10:07:53 2024 00:19:09.518 read: IOPS=1, BW=1540KiB/s (1577kB/s)(21.0MiB/13966msec) 00:19:09.518 slat (msec): min=3, max=2133, avg=562.90, stdev=899.31 00:19:09.518 clat (msec): min=2144, max=13945, avg=10091.82, stdev=4055.98 00:19:09.518 lat (msec): min=4195, max=13965, avg=10654.72, stdev=3702.70 00:19:09.518 clat percentiles (msec): 00:19:09.518 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 4279], 20.00th=[ 6342], 00:19:09.518 | 30.00th=[ 8490], 40.00th=[ 8490], 50.00th=[12818], 60.00th=[12818], 00:19:09.518 | 70.00th=[13758], 80.00th=[13758], 90.00th=[13892], 95.00th=[13892], 00:19:09.518 | 99.00th=[13892], 99.50th=[13892], 99.90th=[13892], 99.95th=[13892], 00:19:09.518 | 99.99th=[13892] 00:19:09.518 lat (msec) : >=2000=100.00% 00:19:09.518 cpu : usr=0.00%, sys=0.13%, ctx=83, majf=0, minf=5377 00:19:09.518 IO depths : 1=4.8%, 2=9.5%, 4=19.0%, 8=38.1%, 16=28.6%, 32=0.0%, >=64=0.0% 00:19:09.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.518 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:09.518 issued rwts: total=21,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.518 job4: (groupid=0, jobs=1): err= 0: pid=2582962: Thu Jul 25 10:07:53 2024 00:19:09.518 read: IOPS=2, BW=2332KiB/s (2388kB/s)(32.0MiB/14049msec) 00:19:09.518 slat (usec): min=814, max=2133.2k, avg=371995.11, stdev=770876.11 00:19:09.518 clat (msec): min=2144, max=14046, avg=11562.15, stdev=3692.89 00:19:09.518 lat (msec): min=4195, 
max=14048, avg=11934.14, stdev=3291.30 00:19:09.518 clat percentiles (msec): 00:19:09.518 | 1.00th=[ 2140], 5.00th=[ 4212], 10.00th=[ 6342], 20.00th=[ 8490], 00:19:09.518 | 30.00th=[10671], 40.00th=[12818], 50.00th=[13892], 60.00th=[14026], 00:19:09.518 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:19:09.518 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:19:09.518 | 99.99th=[14026] 00:19:09.518 lat (msec) : >=2000=100.00% 00:19:09.518 cpu : usr=0.00%, sys=0.19%, ctx=91, majf=0, minf=8193 00:19:09.518 IO depths : 1=3.1%, 2=6.2%, 4=12.5%, 8=25.0%, 16=50.0%, 32=3.1%, >=64=0.0% 00:19:09.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.518 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=100.0%, 64=0.0%, >=64=0.0% 00:19:09.518 issued rwts: total=32,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.518 job4: (groupid=0, jobs=1): err= 0: pid=2582963: Thu Jul 25 10:07:53 2024 00:19:09.518 read: IOPS=2, BW=2623KiB/s (2686kB/s)(36.0MiB/14056msec) 00:19:09.518 slat (usec): min=718, max=2162.3k, avg=330869.35, stdev=735848.14 00:19:09.518 clat (msec): min=2143, max=14054, avg=11557.42, stdev=3850.94 00:19:09.518 lat (msec): min=4186, max=14055, avg=11888.29, stdev=3516.18 00:19:09.518 clat percentiles (msec): 00:19:09.518 | 1.00th=[ 2140], 5.00th=[ 4178], 10.00th=[ 4279], 20.00th=[ 6409], 00:19:09.518 | 30.00th=[10671], 40.00th=[13892], 50.00th=[14026], 60.00th=[14026], 00:19:09.518 | 70.00th=[14026], 80.00th=[14026], 90.00th=[14026], 95.00th=[14026], 00:19:09.518 | 99.00th=[14026], 99.50th=[14026], 99.90th=[14026], 99.95th=[14026], 00:19:09.518 | 99.99th=[14026] 00:19:09.518 lat (msec) : >=2000=100.00% 00:19:09.518 cpu : usr=0.00%, sys=0.21%, ctx=98, majf=0, minf=9217 00:19:09.518 IO depths : 1=2.8%, 2=5.6%, 4=11.1%, 8=22.2%, 16=44.4%, 32=13.9%, >=64=0.0% 00:19:09.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.518 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.518 issued rwts: total=36,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.518 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.518 job5: (groupid=0, jobs=1): err= 0: pid=2582966: Thu Jul 25 10:07:53 2024 00:19:09.518 read: IOPS=2, BW=3015KiB/s (3088kB/s)(35.0MiB/11886msec) 00:19:09.518 slat (usec): min=709, max=3166.0k, avg=308059.10, stdev=822514.98 00:19:09.518 clat (msec): min=1103, max=11885, avg=9633.32, stdev=3234.19 00:19:09.518 lat (msec): min=2160, max=11885, avg=9941.38, stdev=2893.38 00:19:09.518 clat percentiles (msec): 00:19:09.518 | 1.00th=[ 1099], 5.00th=[ 2165], 10.00th=[ 5269], 20.00th=[ 5336], 00:19:09.518 | 30.00th=[ 8557], 40.00th=[10671], 50.00th=[10671], 60.00th=[11879], 00:19:09.518 | 70.00th=[11879], 80.00th=[11879], 90.00th=[11879], 95.00th=[11879], 00:19:09.518 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:19:09.518 | 99.99th=[11879] 00:19:09.518 lat (msec) : 2000=2.86%, >=2000=97.14% 00:19:09.518 cpu : usr=0.00%, sys=0.22%, ctx=65, majf=0, minf=8961 00:19:09.518 IO depths : 1=2.9%, 2=5.7%, 4=11.4%, 8=22.9%, 16=45.7%, 32=11.4%, >=64=0.0% 00:19:09.518 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.518 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.518 issued rwts: total=35,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.518 latency : target=0, window=0, percentile=100.00%, depth=128 
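Each "jobN" block above is fio's per-job summary for one NVMe-oF namespace: "clat" is completion latency with its percentile table, "bw"/"iops" give min/max/mean across sampled intervals, and the "IO depths" line shows how the in-flight queue was distributed, so ">=64=92.8%" means roughly 93% of IOs were issued with at least 64 others already in flight against the 128-deep queue. A minimal sketch for pulling the headline throughput numbers back out of a log like this; the log file name is hypothetical and the pattern assumes the exact "read: IOPS=..., BW=..." wording seen here:

  # Hypothetical helper: average the per-job read IOPS from an autotest log.
  # Splitting on '=' and ',' makes $2 the numeric IOPS of each matched line.
  grep -o 'read: IOPS=[0-9.]*, BW=[0-9.]*[KMG]iB/s' build.log \
    | awk -F'[=,]' '{iops += $2; n++} END { if (n) printf "jobs=%d, mean IOPS=%.1f\n", n, iops/n }'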
00:19:09.518 job5: (groupid=0, jobs=1): err= 0: pid=2582967: Thu Jul 25 10:07:53 2024 00:19:09.519 read: IOPS=240, BW=240MiB/s (252MB/s)(2411MiB/10043msec) 00:19:09.519 slat (usec): min=45, max=107213, avg=4146.43, stdev=10026.77 00:19:09.519 clat (msec): min=32, max=4758, avg=506.24, stdev=255.72 00:19:09.519 lat (msec): min=51, max=4759, avg=510.38, stdev=257.15 00:19:09.519 clat percentiles (msec): 00:19:09.519 | 1.00th=[ 142], 5.00th=[ 213], 10.00th=[ 300], 20.00th=[ 388], 00:19:09.519 | 30.00th=[ 418], 40.00th=[ 443], 50.00th=[ 481], 60.00th=[ 514], 00:19:09.519 | 70.00th=[ 550], 80.00th=[ 617], 90.00th=[ 735], 95.00th=[ 835], 00:19:09.519 | 99.00th=[ 927], 99.50th=[ 936], 99.90th=[ 4665], 99.95th=[ 4732], 00:19:09.519 | 99.99th=[ 4732] 00:19:09.519 bw ( KiB/s): min=53248, max=512000, per=9.76%, avg=246159.68, stdev=99398.63, samples=19 00:19:09.519 iops : min= 52, max= 500, avg=240.32, stdev=97.15, samples=19 00:19:09.519 lat (msec) : 50=0.04%, 100=0.66%, 250=6.10%, 500=49.77%, 750=33.72% 00:19:09.519 lat (msec) : 1000=9.29%, >=2000=0.41% 00:19:09.519 cpu : usr=0.13%, sys=3.07%, ctx=2120, majf=0, minf=32769 00:19:09.519 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:19:09.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.519 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.519 issued rwts: total=2411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.519 job5: (groupid=0, jobs=1): err= 0: pid=2582968: Thu Jul 25 10:07:53 2024 00:19:09.519 read: IOPS=13, BW=13.7MiB/s (14.4MB/s)(163MiB/11855msec) 00:19:09.519 slat (usec): min=483, max=2091.0k, avg=72243.98, stdev=356357.93 00:19:09.519 clat (msec): min=77, max=9321, avg=2909.43, stdev=1600.40 00:19:09.519 lat (msec): min=958, max=9402, avg=2981.67, stdev=1657.89 00:19:09.519 clat percentiles (msec): 00:19:09.519 | 1.00th=[ 927], 5.00th=[ 986], 10.00th=[ 2106], 20.00th=[ 2265], 00:19:09.519 | 30.00th=[ 2433], 40.00th=[ 2567], 50.00th=[ 2668], 60.00th=[ 2769], 00:19:09.519 | 70.00th=[ 2903], 80.00th=[ 3037], 90.00th=[ 3104], 95.00th=[ 7215], 00:19:09.519 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:19:09.519 | 99.99th=[ 9329] 00:19:09.519 bw ( KiB/s): min=26624, max=44966, per=1.42%, avg=35795.00, stdev=12969.75, samples=2 00:19:09.519 iops : min= 26, max= 43, avg=34.50, stdev=12.02, samples=2 00:19:09.519 lat (msec) : 100=0.61%, 1000=6.13%, 2000=3.07%, >=2000=90.18% 00:19:09.519 cpu : usr=0.00%, sys=0.65%, ctx=335, majf=0, minf=32769 00:19:09.519 IO depths : 1=0.6%, 2=1.2%, 4=2.5%, 8=4.9%, 16=9.8%, 32=19.6%, >=64=61.3% 00:19:09.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.519 complete : 0=0.0%, 4=97.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=2.7% 00:19:09.519 issued rwts: total=163,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.519 job5: (groupid=0, jobs=1): err= 0: pid=2582969: Thu Jul 25 10:07:53 2024 00:19:09.519 read: IOPS=4, BW=4686KiB/s (4798kB/s)(46.0MiB/10052msec) 00:19:09.519 slat (usec): min=584, max=2127.4k, avg=217684.97, stdev=604901.98 00:19:09.519 clat (msec): min=37, max=9951, avg=2981.30, stdev=3498.82 00:19:09.519 lat (msec): min=55, max=10051, avg=3198.99, stdev=3620.98 00:19:09.519 clat percentiles (msec): 00:19:09.519 | 1.00th=[ 38], 5.00th=[ 59], 10.00th=[ 65], 20.00th=[ 124], 00:19:09.519 | 30.00th=[ 194], 
40.00th=[ 330], 50.00th=[ 347], 60.00th=[ 2500], 00:19:09.519 | 70.00th=[ 4597], 80.00th=[ 6745], 90.00th=[ 8926], 95.00th=[ 8926], 00:19:09.519 | 99.00th=[10000], 99.50th=[10000], 99.90th=[10000], 99.95th=[10000], 00:19:09.519 | 99.99th=[10000] 00:19:09.519 lat (msec) : 50=2.17%, 100=15.22%, 250=15.22%, 500=19.57%, >=2000=47.83% 00:19:09.519 cpu : usr=0.01%, sys=0.29%, ctx=127, majf=0, minf=11777 00:19:09.519 IO depths : 1=2.2%, 2=4.3%, 4=8.7%, 8=17.4%, 16=34.8%, 32=32.6%, >=64=0.0% 00:19:09.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.519 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.519 issued rwts: total=46,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.519 job5: (groupid=0, jobs=1): err= 0: pid=2582970: Thu Jul 25 10:07:53 2024 00:19:09.519 read: IOPS=8, BW=9104KiB/s (9323kB/s)(97.0MiB/10910msec) 00:19:09.519 slat (usec): min=369, max=2051.9k, avg=111478.40, stdev=449271.13 00:19:09.519 clat (msec): min=95, max=10908, avg=7715.55, stdev=3444.23 00:19:09.519 lat (msec): min=2120, max=10909, avg=7827.02, stdev=3369.22 00:19:09.519 clat percentiles (msec): 00:19:09.519 | 1.00th=[ 96], 5.00th=[ 2165], 10.00th=[ 2232], 20.00th=[ 4329], 00:19:09.519 | 30.00th=[ 4396], 40.00th=[ 6477], 50.00th=[ 8658], 60.00th=[10671], 00:19:09.519 | 70.00th=[10805], 80.00th=[10805], 90.00th=[10939], 95.00th=[10939], 00:19:09.519 | 99.00th=[10939], 99.50th=[10939], 99.90th=[10939], 99.95th=[10939], 00:19:09.519 | 99.99th=[10939] 00:19:09.519 lat (msec) : 100=1.03%, >=2000=98.97% 00:19:09.519 cpu : usr=0.00%, sys=0.58%, ctx=92, majf=0, minf=24833 00:19:09.519 IO depths : 1=1.0%, 2=2.1%, 4=4.1%, 8=8.2%, 16=16.5%, 32=33.0%, >=64=35.1% 00:19:09.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.519 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=100.0% 00:19:09.519 issued rwts: total=97,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.519 job5: (groupid=0, jobs=1): err= 0: pid=2582971: Thu Jul 25 10:07:53 2024 00:19:09.519 read: IOPS=3, BW=3543KiB/s (3628kB/s)(41.0MiB/11849msec) 00:19:09.519 slat (usec): min=560, max=2090.4k, avg=244187.49, stdev=588507.12 00:19:09.519 clat (msec): min=1836, max=11750, avg=3881.94, stdev=2982.00 00:19:09.519 lat (msec): min=1852, max=11848, avg=4126.13, stdev=3211.43 00:19:09.519 clat percentiles (msec): 00:19:09.519 | 1.00th=[ 1838], 5.00th=[ 1854], 10.00th=[ 1871], 20.00th=[ 1921], 00:19:09.519 | 30.00th=[ 1972], 40.00th=[ 1989], 50.00th=[ 2056], 60.00th=[ 2165], 00:19:09.519 | 70.00th=[ 4245], 80.00th=[ 7416], 90.00th=[ 9597], 95.00th=[ 9597], 00:19:09.519 | 99.00th=[11745], 99.50th=[11745], 99.90th=[11745], 99.95th=[11745], 00:19:09.519 | 99.99th=[11745] 00:19:09.519 lat (msec) : 2000=41.46%, >=2000=58.54% 00:19:09.519 cpu : usr=0.00%, sys=0.25%, ctx=132, majf=0, minf=10497 00:19:09.519 IO depths : 1=2.4%, 2=4.9%, 4=9.8%, 8=19.5%, 16=39.0%, 32=24.4%, >=64=0.0% 00:19:09.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.519 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.519 issued rwts: total=41,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.519 job5: (groupid=0, jobs=1): err= 0: pid=2582972: Thu Jul 25 10:07:53 2024 00:19:09.519 read: IOPS=64, BW=64.1MiB/s 
(67.2MB/s)(695MiB/10840msec) 00:19:09.519 slat (usec): min=699, max=2043.3k, avg=15459.57, stdev=138303.17 00:19:09.519 clat (msec): min=92, max=7023, avg=1736.87, stdev=2338.83 00:19:09.519 lat (msec): min=506, max=7029, avg=1752.33, stdev=2344.52 00:19:09.519 clat percentiles (msec): 00:19:09.519 | 1.00th=[ 506], 5.00th=[ 510], 10.00th=[ 510], 20.00th=[ 514], 00:19:09.519 | 30.00th=[ 518], 40.00th=[ 518], 50.00th=[ 527], 60.00th=[ 701], 00:19:09.519 | 70.00th=[ 802], 80.00th=[ 1821], 90.00th=[ 6745], 95.00th=[ 6879], 00:19:09.519 | 99.00th=[ 7013], 99.50th=[ 7013], 99.90th=[ 7013], 99.95th=[ 7013], 00:19:09.519 | 99.99th=[ 7013] 00:19:09.519 bw ( KiB/s): min= 4096, max=253952, per=5.12%, avg=129024.00, stdev=110421.20, samples=9 00:19:09.519 iops : min= 4, max= 248, avg=126.00, stdev=107.83, samples=9 00:19:09.519 lat (msec) : 100=0.14%, 500=0.14%, 750=62.88%, 1000=15.83%, 2000=1.29% 00:19:09.519 lat (msec) : >=2000=19.71% 00:19:09.519 cpu : usr=0.00%, sys=1.03%, ctx=1382, majf=0, minf=32769 00:19:09.519 IO depths : 1=0.1%, 2=0.3%, 4=0.6%, 8=1.2%, 16=2.3%, 32=4.6%, >=64=90.9% 00:19:09.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.519 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:09.519 issued rwts: total=695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.519 job5: (groupid=0, jobs=1): err= 0: pid=2582973: Thu Jul 25 10:07:53 2024 00:19:09.519 read: IOPS=16, BW=16.7MiB/s (17.5MB/s)(199MiB/11946msec) 00:19:09.519 slat (usec): min=357, max=2077.4k, avg=59637.67, stdev=321441.27 00:19:09.519 clat (msec): min=77, max=9311, avg=2913.70, stdev=2042.15 00:19:09.519 lat (msec): min=749, max=9316, avg=2973.34, stdev=2088.98 00:19:09.519 clat percentiles (msec): 00:19:09.519 | 1.00th=[ 743], 5.00th=[ 768], 10.00th=[ 776], 20.00th=[ 2123], 00:19:09.519 | 30.00th=[ 2333], 40.00th=[ 2467], 50.00th=[ 2567], 60.00th=[ 2735], 00:19:09.519 | 70.00th=[ 2836], 80.00th=[ 2937], 90.00th=[ 5336], 95.00th=[ 9194], 00:19:09.519 | 99.00th=[ 9329], 99.50th=[ 9329], 99.90th=[ 9329], 99.95th=[ 9329], 00:19:09.519 | 99.99th=[ 9329] 00:19:09.519 bw ( KiB/s): min=65536, max=79872, per=2.88%, avg=72704.00, stdev=10137.08, samples=2 00:19:09.519 iops : min= 64, max= 78, avg=71.00, stdev= 9.90, samples=2 00:19:09.519 lat (msec) : 100=0.50%, 750=2.51%, 1000=16.58%, >=2000=80.40% 00:19:09.519 cpu : usr=0.01%, sys=0.69%, ctx=355, majf=0, minf=32769 00:19:09.519 IO depths : 1=0.5%, 2=1.0%, 4=2.0%, 8=4.0%, 16=8.0%, 32=16.1%, >=64=68.3% 00:19:09.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.519 complete : 0=0.0%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=1.4% 00:19:09.519 issued rwts: total=199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.519 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.519 job5: (groupid=0, jobs=1): err= 0: pid=2582974: Thu Jul 25 10:07:53 2024 00:19:09.519 read: IOPS=4, BW=4124KiB/s (4223kB/s)(48.0MiB/11918msec) 00:19:09.519 slat (usec): min=714, max=3106.1k, avg=208570.48, stdev=663076.10 00:19:09.519 clat (msec): min=1906, max=11916, avg=8293.38, stdev=4336.74 00:19:09.519 lat (msec): min=1935, max=11917, avg=8501.95, stdev=4263.14 00:19:09.519 clat percentiles (msec): 00:19:09.519 | 1.00th=[ 1905], 5.00th=[ 1955], 10.00th=[ 1989], 20.00th=[ 2072], 00:19:09.519 | 30.00th=[ 5336], 40.00th=[ 7483], 50.00th=[11745], 60.00th=[11879], 00:19:09.519 | 70.00th=[11879], 80.00th=[11879], 90.00th=[11879], 
95.00th=[11879], 00:19:09.519 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:19:09.519 | 99.99th=[11879] 00:19:09.519 lat (msec) : 2000=10.42%, >=2000=89.58% 00:19:09.519 cpu : usr=0.00%, sys=0.33%, ctx=144, majf=0, minf=12289 00:19:09.519 IO depths : 1=2.1%, 2=4.2%, 4=8.3%, 8=16.7%, 16=33.3%, 32=35.4%, >=64=0.0% 00:19:09.519 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.519 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.520 issued rwts: total=48,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.520 job5: (groupid=0, jobs=1): err= 0: pid=2582975: Thu Jul 25 10:07:53 2024 00:19:09.520 read: IOPS=5, BW=5961KiB/s (6104kB/s)(63.0MiB/10822msec) 00:19:09.520 slat (usec): min=596, max=2052.5k, avg=170335.27, stdev=552255.14 00:19:09.520 clat (msec): min=89, max=10819, avg=6853.47, stdev=3395.56 00:19:09.520 lat (msec): min=2116, max=10821, avg=7023.80, stdev=3319.10 00:19:09.520 clat percentiles (msec): 00:19:09.520 | 1.00th=[ 90], 5.00th=[ 2140], 10.00th=[ 2198], 20.00th=[ 2232], 00:19:09.520 | 30.00th=[ 4329], 40.00th=[ 6409], 50.00th=[ 6477], 60.00th=[ 8658], 00:19:09.520 | 70.00th=[10671], 80.00th=[10805], 90.00th=[10805], 95.00th=[10805], 00:19:09.520 | 99.00th=[10805], 99.50th=[10805], 99.90th=[10805], 99.95th=[10805], 00:19:09.520 | 99.99th=[10805] 00:19:09.520 lat (msec) : 100=1.59%, >=2000=98.41% 00:19:09.520 cpu : usr=0.00%, sys=0.44%, ctx=72, majf=0, minf=16129 00:19:09.520 IO depths : 1=1.6%, 2=3.2%, 4=6.3%, 8=12.7%, 16=25.4%, 32=50.8%, >=64=0.0% 00:19:09.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.520 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.520 issued rwts: total=63,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.520 job5: (groupid=0, jobs=1): err= 0: pid=2582976: Thu Jul 25 10:07:53 2024 00:19:09.520 read: IOPS=3, BW=3189KiB/s (3266kB/s)(37.0MiB/11880msec) 00:19:09.520 slat (usec): min=1748, max=2065.2k, avg=270613.00, stdev=655915.05 00:19:09.520 clat (msec): min=1866, max=11863, avg=5175.88, stdev=3514.52 00:19:09.520 lat (msec): min=1883, max=11879, avg=5446.49, stdev=3636.03 00:19:09.520 clat percentiles (msec): 00:19:09.520 | 1.00th=[ 1871], 5.00th=[ 1888], 10.00th=[ 1921], 20.00th=[ 1989], 00:19:09.520 | 30.00th=[ 2056], 40.00th=[ 2140], 50.00th=[ 4279], 60.00th=[ 6342], 00:19:09.520 | 70.00th=[ 6409], 80.00th=[ 7550], 90.00th=[11745], 95.00th=[11879], 00:19:09.520 | 99.00th=[11879], 99.50th=[11879], 99.90th=[11879], 99.95th=[11879], 00:19:09.520 | 99.99th=[11879] 00:19:09.520 lat (msec) : 2000=21.62%, >=2000=78.38% 00:19:09.520 cpu : usr=0.00%, sys=0.26%, ctx=129, majf=0, minf=9473 00:19:09.520 IO depths : 1=2.7%, 2=5.4%, 4=10.8%, 8=21.6%, 16=43.2%, 32=16.2%, >=64=0.0% 00:19:09.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.520 complete : 0=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=100.0%, >=64=0.0% 00:19:09.520 issued rwts: total=37,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.520 job5: (groupid=0, jobs=1): err= 0: pid=2582977: Thu Jul 25 10:07:53 2024 00:19:09.520 read: IOPS=66, BW=67.0MiB/s (70.2MB/s)(731MiB/10912msec) 00:19:09.520 slat (usec): min=472, max=2102.9k, avg=14792.63, stdev=136094.50 00:19:09.520 clat (msec): min=95, max=6939, 
avg=1707.99, stdev=2232.56 00:19:09.520 lat (msec): min=498, max=6941, avg=1722.78, stdev=2238.13 00:19:09.520 clat percentiles (msec): 00:19:09.520 | 1.00th=[ 498], 5.00th=[ 510], 10.00th=[ 514], 20.00th=[ 518], 00:19:09.520 | 30.00th=[ 523], 40.00th=[ 523], 50.00th=[ 531], 60.00th=[ 743], 00:19:09.520 | 70.00th=[ 810], 80.00th=[ 1838], 90.00th=[ 6678], 95.00th=[ 6812], 00:19:09.520 | 99.00th=[ 6879], 99.50th=[ 6946], 99.90th=[ 6946], 99.95th=[ 6946], 00:19:09.520 | 99.99th=[ 6946] 00:19:09.520 bw ( KiB/s): min=14336, max=253952, per=6.12%, avg=154311.75, stdev=103103.38, samples=8 00:19:09.520 iops : min= 14, max= 248, avg=150.50, stdev=100.83, samples=8 00:19:09.520 lat (msec) : 100=0.14%, 500=1.09%, 750=58.96%, 1000=15.87%, 2000=4.51% 00:19:09.520 lat (msec) : >=2000=19.43% 00:19:09.520 cpu : usr=0.01%, sys=1.14%, ctx=1403, majf=0, minf=32769 00:19:09.520 IO depths : 1=0.1%, 2=0.3%, 4=0.5%, 8=1.1%, 16=2.2%, 32=4.4%, >=64=91.4% 00:19:09.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.520 complete : 0=0.0%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.2% 00:19:09.520 issued rwts: total=731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.520 job5: (groupid=0, jobs=1): err= 0: pid=2582978: Thu Jul 25 10:07:53 2024 00:19:09.520 read: IOPS=160, BW=161MiB/s (169MB/s)(2251MiB/13983msec) 00:19:09.520 slat (usec): min=51, max=2114.6k, avg=5253.14, stdev=62476.43 00:19:09.520 clat (msec): min=241, max=6635, avg=772.48, stdev=1370.40 00:19:09.520 lat (msec): min=242, max=6635, avg=777.74, stdev=1375.02 00:19:09.520 clat percentiles (msec): 00:19:09.520 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 264], 00:19:09.520 | 30.00th=[ 268], 40.00th=[ 397], 50.00th=[ 426], 60.00th=[ 502], 00:19:09.520 | 70.00th=[ 542], 80.00th=[ 584], 90.00th=[ 852], 95.00th=[ 4329], 00:19:09.520 | 99.00th=[ 6611], 99.50th=[ 6611], 99.90th=[ 6611], 99.95th=[ 6611], 00:19:09.520 | 99.99th=[ 6611] 00:19:09.520 bw ( KiB/s): min= 2052, max=519153, per=10.14%, avg=255716.41, stdev=160819.42, samples=17 00:19:09.520 iops : min= 2, max= 506, avg=249.59, stdev=156.98, samples=17 00:19:09.520 lat (msec) : 250=9.02%, 500=51.40%, 750=26.52%, 1000=6.71%, >=2000=6.35% 00:19:09.520 cpu : usr=0.07%, sys=1.64%, ctx=2071, majf=0, minf=32769 00:19:09.520 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.4%, >=64=97.2% 00:19:09.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:09.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:09.520 issued rwts: total=2251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:09.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:09.520 00:19:09.520 Run status group 0 (all jobs): 00:19:09.520 READ: bw=2462MiB/s (2582MB/s), 1030KiB/s-240MiB/s (1054kB/s-252MB/s), io=33.9GiB (36.4GB), run=10026-14109msec 00:19:09.520 00:19:09.520 Disk stats (read/write): 00:19:09.520 nvme0n1: ios=38521/0, merge=0/0, ticks=9864851/0, in_queue=9864851, util=98.99% 00:19:09.520 nvme1n1: ios=59525/0, merge=0/0, ticks=11733474/0, in_queue=11733474, util=99.06% 00:19:09.520 nvme2n1: ios=45906/0, merge=0/0, ticks=9849413/0, in_queue=9849413, util=99.22% 00:19:09.520 nvme3n1: ios=69992/0, merge=0/0, ticks=10233242/0, in_queue=10233242, util=99.17% 00:19:09.520 nvme4n1: ios=8882/0, merge=0/0, ticks=6984717/0, in_queue=6984717, util=99.33% 00:19:09.520 nvme5n1: ios=54535/0, merge=0/0, ticks=10523245/0, in_queue=10523245, util=99.27% 
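With the run status and per-device stats printed (about 33.9 GiB read in total, all six nvme*n1 namespaces near 99% utilization), the script moves to teardown: sync, then for each of the six subsystems disconnect the host side, wait for the serial number to drop out of lsblk, and delete the subsystem over RPC. The loop traced below distills to roughly the following sketch (a paraphrase of target/srq_overwhelm.sh with the waitforserial_disconnect polling simplified; rpc_cmd is the suite's JSON-RPC wrapper and exists only inside the test environment):

  sync
  for i in $(seq 0 5); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
      # waitforserial_disconnect: poll until the namespace serial leaves lsblk
      serial=$(printf 'SPDK%014d' "$i")
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          sleep 1
      done
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
  done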
00:19:09.520 10:07:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@38 -- # sync 00:19:09.520 10:07:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # seq 0 5 00:19:09.520 10:07:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:09.520 10:07:54 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode0 00:19:10.455 NQN:nqn.2016-06.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000000 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000000 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000000 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:10.455 10:07:55 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:11.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000001 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000001 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000001 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:11.390 10:07:56 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:12.323 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:12.323 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000002 00:19:12.323 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000002 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000002 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:12.324 10:07:57 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:13.258 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000003 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000003 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000003 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:13.258 10:07:58 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:14.193 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:14.193 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000004 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000004 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000004 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@40 -- # for i in $(seq 0 5) 00:19:14.194 10:07:59 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@41 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:15.129 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@42 -- # waitforserial_disconnect SPDK00000000000005 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1219 -- # local i=0 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1220 -- # grep -q -w SPDK00000000000005 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1227 -- # grep -q -w SPDK00000000000005 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1231 -- # return 0 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- target/srq_overwhelm.sh@48 -- # nvmftestfini 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@117 -- # sync 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@120 -- # set +e 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:15.129 rmmod nvme_rdma 00:19:15.129 rmmod nvme_fabrics 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@124 -- # set -e 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@125 -- # return 0 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@489 -- # '[' -n 2581473 ']' 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@490 -- # killprocess 2581473 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@950 -- # '[' -z 2581473 ']' 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@954 -- # kill -0 2581473 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # uname 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2581473 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2581473' 00:19:15.129 killing process with pid 2581473 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@969 -- # kill 2581473 00:19:15.129 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@974 -- # wait 2581473 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:15.699 00:19:15.699 real 0m34.370s 00:19:15.699 user 
2m0.609s 00:19:15.699 sys 0m13.073s 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_srq_overwhelm -- common/autotest_common.sh@10 -- # set +x 00:19:15.699 ************************************ 00:19:15.699 END TEST nvmf_srq_overwhelm 00:19:15.699 ************************************ 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:15.699 ************************************ 00:19:15.699 START TEST nvmf_shutdown 00:19:15.699 ************************************ 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:19:15.699 * Looking for test storage... 00:19:15.699 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.699 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:15.700 10:08:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:15.700 ************************************ 00:19:15.700 START TEST nvmf_shutdown_tc1 00:19:15.700 ************************************ 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:15.700 10:08:00 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:15.700 10:08:00 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.268 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.269 10:08:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:22.269 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:22.269 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:22.269 10:08:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:22.269 Found net devices under 0000:da:00.0: mlx_0_0 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:22.269 Found net devices under 0000:da:00.1: mlx_0_1 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@420 -- # rdma_device_init 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # uname 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
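The device walk above filters the PCI bus against known Intel (e810, x722) and Mellanox device IDs, keeps only the mlx list because the mlx5 driver is in play, and then maps each surviving function to its kernel interface by globbing sysfs, the same /sys/bus/pci/devices/$pci/net/* expansion visible in the trace. A condensed sketch of that resolution, using the two ConnectX functions found on this host (0x15b3:0x1015 at 0000:da:00.0 and 0000:da:00.1) as input:

# sketch (bash): pci -> netdev resolution as traced above
net_devs=()
for pci in 0000:da:00.0 0000:da:00.1; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one glob hit per netdev on the function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # basename only: the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done
# on this host the loop yields mlx_0_0 and mlx_0_1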
nvmf/common.sh@62 -- # modprobe ib_cm 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:22.269 10:08:06 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:22.269 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:22.269 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:22.269 altname enp218s0f0np0 00:19:22.269 altname ens818f0np0 00:19:22.269 inet 192.168.100.8/24 scope global mlx_0_0 00:19:22.269 valid_lft forever preferred_lft forever 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:22.269 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:22.270 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:22.270 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:22.270 altname enp218s0f1np1 00:19:22.270 altname ens818f1np1 00:19:22.270 inet 192.168.100.9/24 scope global mlx_0_1 00:19:22.270 valid_lft forever preferred_lft forever 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
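get_ip_address, expanded twice above, is a three-stage pipe over the one-line-per-address output of ip -o -4: field $4 of that output is the CIDR-form address, and cut -d/ -f1 trims the prefix length. Rebuilt as a standalone helper, matching the trace:

# sketch (bash): get_ip_address as expanded in the trace
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # -> 192.168.100.8
get_ip_address mlx_0_1   # -> 192.168.100.9

The -o flag matters here: without it, ip prints multi-line records and awk's field positions no longer line up.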
-- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # continue 2 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@87 
-- # get_ip_address mlx_0_1 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:22.270 192.168.100.9' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:22.270 192.168.100.9' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # head -n 1 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:22.270 192.168.100.9' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # tail -n +2 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # head -n 1 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2589469 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2589469 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2589469 ']' 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
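Both addresses land in the newline-separated RDMA_IP_LIST, and the first and second target IPs are positional picks from it, head -n 1 for the first and tail -n +2 | head -n 1 for the second, exactly as traced. A sketch of that selection:

# sketch (bash): target-IP selection from RDMA_IP_LIST, as traced above
RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
[ -n "$NVMF_FIRST_TARGET_IP" ]   # common.sh's '-z' guard: no first IP means no usable fabric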
-- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:22.270 10:08:06 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:22.270 [2024-07-25 10:08:06.592843] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:22.270 [2024-07-25 10:08:06.592902] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.270 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.270 [2024-07-25 10:08:06.660075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:22.270 [2024-07-25 10:08:06.740369] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.270 [2024-07-25 10:08:06.740405] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.270 [2024-07-25 10:08:06.740412] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.270 [2024-07-25 10:08:06.740418] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.270 [2024-07-25 10:08:06.740423] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:22.270 [2024-07-25 10:08:06.740537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.270 [2024-07-25 10:08:06.740641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.270 [2024-07-25 10:08:06.740996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.270 [2024-07-25 10:08:06.740996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:22.270 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:22.270 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:22.270 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.270 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:22.270 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:22.528 [2024-07-25 10:08:07.467280] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdeae10/0xdef300) succeed. 00:19:22.528 [2024-07-25 10:08:07.476820] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdec400/0xe30990) succeed. 
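With the target up (nvmfpid 2589469, reactors on cores 1-4), the RDMA transport is created over the app's RPC socket. rpc_cmd in these suites forwards its arguments to scripts/rpc.py against /var/tmp/spdk.sock, so the step above should be equivalent to the direct call below; a sketch, with the socket path taken from the waitforlisten message:

# sketch: the nvmf_create_transport step issued directly against the RPC socket
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
    nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# -u 8192 sets the IO unit size; both mlx5 ports then come up as IB devices (mlx5_0, mlx5_1)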
00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.528 10:08:07 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:22.528 Malloc1 00:19:22.528 [2024-07-25 10:08:07.686835] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:22.785 Malloc2 00:19:22.785 Malloc3 00:19:22.785 Malloc4 00:19:22.785 Malloc5 00:19:22.785 Malloc6 00:19:22.785 Malloc7 00:19:23.043 Malloc8 00:19:23.043 Malloc9 00:19:23.043 Malloc10 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2589757 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2589757 /var/tmp/bdevperf.sock 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2589757 ']' 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:23.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
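The bdev_svc sidecar just launched reads its entire configuration from --json /dev/fd/63, a process substitution of gen_nvmf_target_json 1..10. As the expansion below shows, that helper stamps out one bdev_nvme_attach_controller fragment per subsystem from a heredoc template, then joins the fragments with a comma IFS and runs the result through jq. A condensed sketch of the pattern with two subsystems instead of ten; the outer JSON wrapper lives in nvmf/common.sh, so bracketing the join into an array for the jq validation pass is an assumption here:

# sketch (bash): the heredoc-accumulate / IFS-join pattern expanded below
config=()
for subsystem in 1 2; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "rdma",
    "traddr": "192.168.100.8",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
(IFS=,; printf '[%s]\n' "${config[*]}") | jq .   # comma-join the fragments, sanity-check with jq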
00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.043 { 00:19:23.043 "params": { 00:19:23.043 "name": "Nvme$subsystem", 00:19:23.043 "trtype": "$TEST_TRANSPORT", 00:19:23.043 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.043 "adrfam": "ipv4", 00:19:23.043 "trsvcid": "$NVMF_PORT", 00:19:23.043 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.043 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.043 "hdgst": ${hdgst:-false}, 00:19:23.043 "ddgst": ${ddgst:-false} 00:19:23.043 }, 00:19:23.043 "method": "bdev_nvme_attach_controller" 00:19:23.043 } 00:19:23.043 EOF 00:19:23.043 )") 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:23.043 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.044 { 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme$subsystem", 00:19:23.044 "trtype": "$TEST_TRANSPORT", 00:19:23.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.044 "adrfam": "ipv4", 00:19:23.044 "trsvcid": "$NVMF_PORT", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.044 "hdgst": ${hdgst:-false}, 00:19:23.044 "ddgst": ${ddgst:-false} 00:19:23.044 }, 00:19:23.044 "method": "bdev_nvme_attach_controller" 00:19:23.044 } 00:19:23.044 EOF 00:19:23.044 )") 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.044 { 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme$subsystem", 00:19:23.044 "trtype": "$TEST_TRANSPORT", 00:19:23.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.044 "adrfam": "ipv4", 00:19:23.044 "trsvcid": "$NVMF_PORT", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.044 "hdgst": ${hdgst:-false}, 00:19:23.044 "ddgst": ${ddgst:-false} 00:19:23.044 }, 00:19:23.044 "method": "bdev_nvme_attach_controller" 00:19:23.044 } 00:19:23.044 EOF 00:19:23.044 )") 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.044 { 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme$subsystem", 00:19:23.044 "trtype": "$TEST_TRANSPORT", 00:19:23.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.044 "adrfam": "ipv4", 00:19:23.044 "trsvcid": "$NVMF_PORT", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.044 "hdgst": ${hdgst:-false}, 00:19:23.044 "ddgst": ${ddgst:-false} 00:19:23.044 }, 00:19:23.044 "method": "bdev_nvme_attach_controller" 00:19:23.044 } 00:19:23.044 EOF 00:19:23.044 )") 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.044 { 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme$subsystem", 00:19:23.044 "trtype": "$TEST_TRANSPORT", 00:19:23.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.044 "adrfam": "ipv4", 00:19:23.044 "trsvcid": "$NVMF_PORT", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.044 "hdgst": ${hdgst:-false}, 00:19:23.044 "ddgst": ${ddgst:-false} 00:19:23.044 }, 00:19:23.044 "method": "bdev_nvme_attach_controller" 00:19:23.044 } 00:19:23.044 EOF 00:19:23.044 )") 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.044 { 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme$subsystem", 00:19:23.044 "trtype": "$TEST_TRANSPORT", 00:19:23.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.044 "adrfam": "ipv4", 00:19:23.044 "trsvcid": "$NVMF_PORT", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.044 "hdgst": ${hdgst:-false}, 00:19:23.044 "ddgst": ${ddgst:-false} 00:19:23.044 }, 00:19:23.044 "method": "bdev_nvme_attach_controller" 00:19:23.044 } 00:19:23.044 EOF 00:19:23.044 )") 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:23.044 [2024-07-25 10:08:08.159580] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
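The "Starting SPDK" banner above is bdev_svc coming up on that /dev/fd/63 config. The harness never writes a config file: shutdown.sh line 73 (echoed later in the trace when the helper is killed) passes --json <(gen_nvmf_target_json ...), and bash materialises the process substitution as a /dev/fd path the app opens like an ordinary file. A sketch of the same launch:

# sketch: JSON config delivered via process substitution, as shutdown.sh does here
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc \
    -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}")
# bash rewrites <(...) into a readable /dev/fd/NN path, /dev/fd/63 in this run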
00:19:23.044 [2024-07-25 10:08:08.159625] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.044 { 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme$subsystem", 00:19:23.044 "trtype": "$TEST_TRANSPORT", 00:19:23.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.044 "adrfam": "ipv4", 00:19:23.044 "trsvcid": "$NVMF_PORT", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.044 "hdgst": ${hdgst:-false}, 00:19:23.044 "ddgst": ${ddgst:-false} 00:19:23.044 }, 00:19:23.044 "method": "bdev_nvme_attach_controller" 00:19:23.044 } 00:19:23.044 EOF 00:19:23.044 )") 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.044 { 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme$subsystem", 00:19:23.044 "trtype": "$TEST_TRANSPORT", 00:19:23.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.044 "adrfam": "ipv4", 00:19:23.044 "trsvcid": "$NVMF_PORT", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.044 "hdgst": ${hdgst:-false}, 00:19:23.044 "ddgst": ${ddgst:-false} 00:19:23.044 }, 00:19:23.044 "method": "bdev_nvme_attach_controller" 00:19:23.044 } 00:19:23.044 EOF 00:19:23.044 )") 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.044 { 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme$subsystem", 00:19:23.044 "trtype": "$TEST_TRANSPORT", 00:19:23.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.044 "adrfam": "ipv4", 00:19:23.044 "trsvcid": "$NVMF_PORT", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.044 "hdgst": ${hdgst:-false}, 00:19:23.044 "ddgst": ${ddgst:-false} 00:19:23.044 }, 00:19:23.044 "method": "bdev_nvme_attach_controller" 00:19:23.044 } 00:19:23.044 EOF 00:19:23.044 )") 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:23.044 { 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme$subsystem", 00:19:23.044 "trtype": "$TEST_TRANSPORT", 00:19:23.044 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:23.044 "adrfam": 
"ipv4", 00:19:23.044 "trsvcid": "$NVMF_PORT", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:23.044 "hdgst": ${hdgst:-false}, 00:19:23.044 "ddgst": ${ddgst:-false} 00:19:23.044 }, 00:19:23.044 "method": "bdev_nvme_attach_controller" 00:19:23.044 } 00:19:23.044 EOF 00:19:23.044 )") 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:23.044 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:23.044 10:08:08 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme1", 00:19:23.044 "trtype": "rdma", 00:19:23.044 "traddr": "192.168.100.8", 00:19:23.044 "adrfam": "ipv4", 00:19:23.044 "trsvcid": "4420", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:23.044 "hdgst": false, 00:19:23.044 "ddgst": false 00:19:23.044 }, 00:19:23.044 "method": "bdev_nvme_attach_controller" 00:19:23.044 },{ 00:19:23.044 "params": { 00:19:23.044 "name": "Nvme2", 00:19:23.044 "trtype": "rdma", 00:19:23.044 "traddr": "192.168.100.8", 00:19:23.044 "adrfam": "ipv4", 00:19:23.044 "trsvcid": "4420", 00:19:23.044 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:23.044 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:23.045 "hdgst": false, 00:19:23.045 "ddgst": false 00:19:23.045 }, 00:19:23.045 "method": "bdev_nvme_attach_controller" 00:19:23.045 },{ 00:19:23.045 "params": { 00:19:23.045 "name": "Nvme3", 00:19:23.045 "trtype": "rdma", 00:19:23.045 "traddr": "192.168.100.8", 00:19:23.045 "adrfam": "ipv4", 00:19:23.045 "trsvcid": "4420", 00:19:23.045 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:23.045 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:23.045 "hdgst": false, 00:19:23.045 "ddgst": false 00:19:23.045 }, 00:19:23.045 "method": "bdev_nvme_attach_controller" 00:19:23.045 },{ 00:19:23.045 "params": { 00:19:23.045 "name": "Nvme4", 00:19:23.045 "trtype": "rdma", 00:19:23.045 "traddr": "192.168.100.8", 00:19:23.045 "adrfam": "ipv4", 00:19:23.045 "trsvcid": "4420", 00:19:23.045 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:23.045 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:23.045 "hdgst": false, 00:19:23.045 "ddgst": false 00:19:23.045 }, 00:19:23.045 "method": "bdev_nvme_attach_controller" 00:19:23.045 },{ 00:19:23.045 "params": { 00:19:23.045 "name": "Nvme5", 00:19:23.045 "trtype": "rdma", 00:19:23.045 "traddr": "192.168.100.8", 00:19:23.045 "adrfam": "ipv4", 00:19:23.045 "trsvcid": "4420", 00:19:23.045 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:23.045 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:23.045 "hdgst": false, 00:19:23.045 "ddgst": false 00:19:23.045 }, 00:19:23.045 "method": "bdev_nvme_attach_controller" 00:19:23.045 },{ 00:19:23.045 "params": { 00:19:23.045 "name": "Nvme6", 00:19:23.045 "trtype": "rdma", 00:19:23.045 "traddr": "192.168.100.8", 00:19:23.045 "adrfam": "ipv4", 00:19:23.045 "trsvcid": "4420", 00:19:23.045 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:23.045 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:23.045 "hdgst": false, 00:19:23.045 "ddgst": false 00:19:23.045 }, 00:19:23.045 "method": "bdev_nvme_attach_controller" 00:19:23.045 },{ 00:19:23.045 "params": { 
00:19:23.045 "name": "Nvme7", 00:19:23.045 "trtype": "rdma", 00:19:23.045 "traddr": "192.168.100.8", 00:19:23.045 "adrfam": "ipv4", 00:19:23.045 "trsvcid": "4420", 00:19:23.045 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:23.045 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:23.045 "hdgst": false, 00:19:23.045 "ddgst": false 00:19:23.045 }, 00:19:23.045 "method": "bdev_nvme_attach_controller" 00:19:23.045 },{ 00:19:23.045 "params": { 00:19:23.045 "name": "Nvme8", 00:19:23.045 "trtype": "rdma", 00:19:23.045 "traddr": "192.168.100.8", 00:19:23.045 "adrfam": "ipv4", 00:19:23.045 "trsvcid": "4420", 00:19:23.045 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:23.045 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:23.045 "hdgst": false, 00:19:23.045 "ddgst": false 00:19:23.045 }, 00:19:23.045 "method": "bdev_nvme_attach_controller" 00:19:23.045 },{ 00:19:23.045 "params": { 00:19:23.045 "name": "Nvme9", 00:19:23.045 "trtype": "rdma", 00:19:23.045 "traddr": "192.168.100.8", 00:19:23.045 "adrfam": "ipv4", 00:19:23.045 "trsvcid": "4420", 00:19:23.045 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:23.045 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:23.045 "hdgst": false, 00:19:23.045 "ddgst": false 00:19:23.045 }, 00:19:23.045 "method": "bdev_nvme_attach_controller" 00:19:23.045 },{ 00:19:23.045 "params": { 00:19:23.045 "name": "Nvme10", 00:19:23.045 "trtype": "rdma", 00:19:23.045 "traddr": "192.168.100.8", 00:19:23.045 "adrfam": "ipv4", 00:19:23.045 "trsvcid": "4420", 00:19:23.045 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:23.045 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:23.045 "hdgst": false, 00:19:23.045 "ddgst": false 00:19:23.045 }, 00:19:23.045 "method": "bdev_nvme_attach_controller" 00:19:23.045 }' 00:19:23.302 [2024-07-25 10:08:08.226836] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.302 [2024-07-25 10:08:08.298898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.253 10:08:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:24.253 10:08:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:24.253 10:08:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:24.253 10:08:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.253 10:08:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:24.253 10:08:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.253 10:08:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2589757 00:19:24.253 10:08:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:24.253 10:08:09 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:25.205 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2589757 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2589469 00:19:25.205 10:08:10 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.205 { 00:19:25.205 "params": { 00:19:25.205 "name": "Nvme$subsystem", 00:19:25.205 "trtype": "$TEST_TRANSPORT", 00:19:25.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.205 "adrfam": "ipv4", 00:19:25.205 "trsvcid": "$NVMF_PORT", 00:19:25.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.205 "hdgst": ${hdgst:-false}, 00:19:25.205 "ddgst": ${ddgst:-false} 00:19:25.205 }, 00:19:25.205 "method": "bdev_nvme_attach_controller" 00:19:25.205 } 00:19:25.205 EOF 00:19:25.205 )") 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.205 { 00:19:25.205 "params": { 00:19:25.205 "name": "Nvme$subsystem", 00:19:25.205 "trtype": "$TEST_TRANSPORT", 00:19:25.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.205 "adrfam": "ipv4", 00:19:25.205 "trsvcid": "$NVMF_PORT", 00:19:25.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.205 "hdgst": ${hdgst:-false}, 00:19:25.205 "ddgst": ${ddgst:-false} 00:19:25.205 }, 00:19:25.205 "method": "bdev_nvme_attach_controller" 00:19:25.205 } 00:19:25.205 EOF 00:19:25.205 )") 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.205 { 00:19:25.205 "params": { 00:19:25.205 "name": "Nvme$subsystem", 00:19:25.205 "trtype": "$TEST_TRANSPORT", 00:19:25.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.205 "adrfam": "ipv4", 00:19:25.205 "trsvcid": "$NVMF_PORT", 00:19:25.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.205 "hdgst": ${hdgst:-false}, 00:19:25.205 "ddgst": ${ddgst:-false} 00:19:25.205 }, 00:19:25.205 "method": "bdev_nvme_attach_controller" 00:19:25.205 } 00:19:25.205 EOF 00:19:25.205 )") 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.205 { 00:19:25.205 "params": { 00:19:25.205 "name": "Nvme$subsystem", 00:19:25.205 "trtype": "$TEST_TRANSPORT", 00:19:25.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.205 "adrfam": "ipv4", 00:19:25.205 "trsvcid": "$NVMF_PORT", 00:19:25.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.205 "hdgst": ${hdgst:-false}, 00:19:25.205 "ddgst": ${ddgst:-false} 00:19:25.205 }, 00:19:25.205 "method": "bdev_nvme_attach_controller" 00:19:25.205 } 00:19:25.205 EOF 00:19:25.205 )") 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.205 { 00:19:25.205 "params": { 00:19:25.205 "name": "Nvme$subsystem", 00:19:25.205 "trtype": "$TEST_TRANSPORT", 00:19:25.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.205 "adrfam": "ipv4", 00:19:25.205 "trsvcid": "$NVMF_PORT", 00:19:25.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.205 "hdgst": ${hdgst:-false}, 00:19:25.205 "ddgst": ${ddgst:-false} 00:19:25.205 }, 00:19:25.205 "method": "bdev_nvme_attach_controller" 00:19:25.205 } 00:19:25.205 EOF 00:19:25.205 )") 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.205 { 00:19:25.205 "params": { 00:19:25.205 "name": "Nvme$subsystem", 00:19:25.205 "trtype": "$TEST_TRANSPORT", 00:19:25.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.205 "adrfam": "ipv4", 00:19:25.205 "trsvcid": "$NVMF_PORT", 00:19:25.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.205 "hdgst": ${hdgst:-false}, 00:19:25.205 "ddgst": ${ddgst:-false} 00:19:25.205 }, 00:19:25.205 "method": "bdev_nvme_attach_controller" 00:19:25.205 } 00:19:25.205 EOF 00:19:25.205 )") 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.205 { 00:19:25.205 "params": { 00:19:25.205 "name": "Nvme$subsystem", 00:19:25.205 "trtype": "$TEST_TRANSPORT", 00:19:25.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.205 "adrfam": "ipv4", 00:19:25.205 "trsvcid": "$NVMF_PORT", 00:19:25.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.205 "hdgst": ${hdgst:-false}, 00:19:25.205 "ddgst": ${ddgst:-false} 00:19:25.205 }, 00:19:25.205 "method": "bdev_nvme_attach_controller" 00:19:25.205 } 00:19:25.205 EOF 00:19:25.205 )") 00:19:25.205 [2024-07-25 10:08:10.210215] 
Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:25.205 [2024-07-25 10:08:10.210262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590024 ] 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.205 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.205 { 00:19:25.205 "params": { 00:19:25.205 "name": "Nvme$subsystem", 00:19:25.205 "trtype": "$TEST_TRANSPORT", 00:19:25.205 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.205 "adrfam": "ipv4", 00:19:25.205 "trsvcid": "$NVMF_PORT", 00:19:25.205 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.205 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.205 "hdgst": ${hdgst:-false}, 00:19:25.205 "ddgst": ${ddgst:-false} 00:19:25.205 }, 00:19:25.205 "method": "bdev_nvme_attach_controller" 00:19:25.206 } 00:19:25.206 EOF 00:19:25.206 )") 00:19:25.206 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:25.206 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.206 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.206 { 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme$subsystem", 00:19:25.206 "trtype": "$TEST_TRANSPORT", 00:19:25.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "$NVMF_PORT", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.206 "hdgst": ${hdgst:-false}, 00:19:25.206 "ddgst": ${ddgst:-false} 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 } 00:19:25.206 EOF 00:19:25.206 )") 00:19:25.206 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:25.206 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:25.206 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:25.206 { 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme$subsystem", 00:19:25.206 "trtype": "$TEST_TRANSPORT", 00:19:25.206 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "$NVMF_PORT", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:25.206 "hdgst": ${hdgst:-false}, 00:19:25.206 "ddgst": ${ddgst:-false} 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 } 00:19:25.206 EOF 00:19:25.206 )") 00:19:25.206 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:25.206 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
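Editor's note: the trace above is the core of gen_nvmf_target_json: one heredoc JSON fragment is appended to the config array per subsystem, the fragments are joined with IFS=',' and the result is validated and pretty-printed by jq before bdevperf consumes it. A condensed, runnable sketch of that pattern follows. It is not the verbatim nvmf/common.sh helper: the real helper (as best I recall) additionally wraps the fragments in a top-level subsystems/bdev config object rather than a bare array, and the fallback defaults below are illustrative only.

gen_config_sketch() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-rdma}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-192.168.100.8}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the fragments with ',' and let jq validate the result; a typo in
    # any heredoc fails here rather than deep inside bdevperf.
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}
# gen_config_sketch 1 2 3   -> an array of three attach_controller entries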
00:19:25.206 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.206 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:25.206 10:08:10 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme1", 00:19:25.206 "trtype": "rdma", 00:19:25.206 "traddr": "192.168.100.8", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "4420", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:25.206 "hdgst": false, 00:19:25.206 "ddgst": false 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 },{ 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme2", 00:19:25.206 "trtype": "rdma", 00:19:25.206 "traddr": "192.168.100.8", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "4420", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:25.206 "hdgst": false, 00:19:25.206 "ddgst": false 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 },{ 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme3", 00:19:25.206 "trtype": "rdma", 00:19:25.206 "traddr": "192.168.100.8", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "4420", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:25.206 "hdgst": false, 00:19:25.206 "ddgst": false 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 },{ 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme4", 00:19:25.206 "trtype": "rdma", 00:19:25.206 "traddr": "192.168.100.8", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "4420", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:25.206 "hdgst": false, 00:19:25.206 "ddgst": false 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 },{ 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme5", 00:19:25.206 "trtype": "rdma", 00:19:25.206 "traddr": "192.168.100.8", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "4420", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:25.206 "hdgst": false, 00:19:25.206 "ddgst": false 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 },{ 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme6", 00:19:25.206 "trtype": "rdma", 00:19:25.206 "traddr": "192.168.100.8", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "4420", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:25.206 "hdgst": false, 00:19:25.206 "ddgst": false 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 },{ 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme7", 00:19:25.206 "trtype": "rdma", 00:19:25.206 "traddr": "192.168.100.8", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "4420", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:25.206 "hdgst": false, 00:19:25.206 "ddgst": false 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 },{ 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme8", 00:19:25.206 "trtype": "rdma", 00:19:25.206 "traddr": "192.168.100.8", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": 
"4420", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:25.206 "hdgst": false, 00:19:25.206 "ddgst": false 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 },{ 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme9", 00:19:25.206 "trtype": "rdma", 00:19:25.206 "traddr": "192.168.100.8", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "4420", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:25.206 "hdgst": false, 00:19:25.206 "ddgst": false 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 },{ 00:19:25.206 "params": { 00:19:25.206 "name": "Nvme10", 00:19:25.206 "trtype": "rdma", 00:19:25.206 "traddr": "192.168.100.8", 00:19:25.206 "adrfam": "ipv4", 00:19:25.206 "trsvcid": "4420", 00:19:25.206 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:25.206 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:25.206 "hdgst": false, 00:19:25.206 "ddgst": false 00:19:25.206 }, 00:19:25.206 "method": "bdev_nvme_attach_controller" 00:19:25.206 }' 00:19:25.206 [2024-07-25 10:08:10.281158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:25.206 [2024-07-25 10:08:10.355190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:26.140 Running I/O for 1 seconds... 00:19:27.513 00:19:27.513 Latency(us) 00:19:27.513 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.513 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:27.513 Verification LBA range: start 0x0 length 0x400 00:19:27.513 Nvme1n1 : 1.16 344.11 21.51 0.00 0.00 181697.62 22719.15 212711.13 00:19:27.513 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:27.513 Verification LBA range: start 0x0 length 0x400 00:19:27.513 Nvme2n1 : 1.16 356.67 22.29 0.00 0.00 173669.42 22843.98 204721.98 00:19:27.513 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:27.513 Verification LBA range: start 0x0 length 0x400 00:19:27.513 Nvme3n1 : 1.16 389.78 24.36 0.00 0.00 156780.75 3869.74 146800.64 00:19:27.513 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:27.513 Verification LBA range: start 0x0 length 0x400 00:19:27.513 Nvme4n1 : 1.17 385.06 24.07 0.00 0.00 156898.89 3510.86 137812.85 00:19:27.513 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:27.513 Verification LBA range: start 0x0 length 0x400 00:19:27.513 Nvme5n1 : 1.17 383.73 23.98 0.00 0.00 155600.28 15416.56 126827.76 00:19:27.513 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:27.513 Verification LBA range: start 0x0 length 0x400 00:19:27.513 Nvme6n1 : 1.17 383.26 23.95 0.00 0.00 153521.42 19972.88 115842.68 00:19:27.513 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:27.513 Verification LBA range: start 0x0 length 0x400 00:19:27.514 Nvme7n1 : 1.17 382.89 23.93 0.00 0.00 150571.47 25215.76 109351.50 00:19:27.514 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:27.514 Verification LBA range: start 0x0 length 0x400 00:19:27.514 Nvme8n1 : 1.17 382.43 23.90 0.00 0.00 149506.65 22968.81 98366.42 00:19:27.514 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:27.514 Verification LBA range: start 0x0 length 0x400 00:19:27.514 Nvme9n1 : 1.17 381.60 23.85 0.00 0.00 148469.34 1833.45 108352.85 
00:19:27.514 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:27.514 Verification LBA range: start 0x0 length 0x400 00:19:27.514 Nvme10n1 : 1.18 378.80 23.67 0.00 0.00 147473.62 8238.81 160781.65 00:19:27.514 =================================================================================================================== 00:19:27.514 Total : 3768.32 235.52 0.00 0.00 157031.75 1833.45 212711.13 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:27.772 rmmod nvme_rdma 00:19:27.772 rmmod nvme_fabrics 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2589469 ']' 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2589469 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2589469 ']' 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2589469 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2589469 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2589469' 00:19:27.772 killing process with pid 2589469 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2589469 00:19:27.772 10:08:12 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2589469 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:28.353 00:19:28.353 real 0m12.387s 00:19:28.353 user 0m30.408s 00:19:28.353 sys 0m5.302s 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:28.353 ************************************ 00:19:28.353 END TEST nvmf_shutdown_tc1 00:19:28.353 ************************************ 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:28.353 ************************************ 00:19:28.353 START TEST nvmf_shutdown_tc2 00:19:28.353 ************************************ 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:28.353 10:08:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:28.353 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:28.353 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:28.353 Found net devices under 0000:da:00.0: mlx_0_0 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:28.353 Found net devices under 0000:da:00.1: mlx_0_1 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@420 -- # rdma_device_init 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # uname 00:19:28.353 
10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:28.353 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.354 10:08:13 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:28.354 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.354 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:28.354 altname enp218s0f0np0 00:19:28.354 altname ens818f0np0 00:19:28.354 inet 192.168.100.8/24 scope global mlx_0_0 00:19:28.354 valid_lft forever preferred_lft forever 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:28.354 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:28.354 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:28.354 altname enp218s0f1np1 00:19:28.354 altname ens818f1np1 00:19:28.354 inet 192.168.100.9/24 scope global mlx_0_1 00:19:28.354 valid_lft forever preferred_lft forever 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # 
NVMF_TRANSPORT_OPTS='-t rdma' 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # continue 2 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 
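Editor's note: the allocate_nic_ips trace above recovers each RDMA interface's address with a three-stage pipe. The same extraction as a standalone sketch (the interface name is just the one used on this rig):

get_ip_address() {
    local interface=$1
    # 'ip -o -4' prints one line per IPv4 address; field 4 is "ADDR/PREFIX",
    # and cut drops the prefix length, leaving the bare address.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
# get_ip_address mlx_0_0   -> 192.168.100.8 on this setup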
00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:28.354 192.168.100.9' 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:28.354 192.168.100.9' 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # head -n 1 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:28.354 192.168.100.9' 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # tail -n +2 00:19:28.354 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # head -n 1 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2590806 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2590806 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2590806 ']' 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.623 10:08:13 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:28.623 [2024-07-25 10:08:13.589065] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:28.623 [2024-07-25 10:08:13.589116] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.623 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.623 [2024-07-25 10:08:13.658339] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:28.623 [2024-07-25 10:08:13.738706] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.623 [2024-07-25 10:08:13.738757] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.623 [2024-07-25 10:08:13.738763] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.623 [2024-07-25 10:08:13.738769] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.623 [2024-07-25 10:08:13.738774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
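Editor's note: the nvmfappstart/waitforlisten sequence above reduces to "launch the target with the requested core mask, then poll its RPC socket before issuing any RPC". A simplified sketch; the polling loop is illustrative rather than the verbatim waitforlisten helper, paths are shortened, and rpc.py stands for SPDK's scripts/rpc.py:

# -m 0x1E pins reactors to cores 1-4 (hence "Total cores available: 4"),
# -e 0xFFFF enables all tracepoint groups, -i 0 is the shm/instance id.
nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Block until the RPC socket answers; rpc_get_methods is a cheap probe.
until rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.1
done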
00:19:28.623 [2024-07-25 10:08:13.738885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.623 [2024-07-25 10:08:13.739010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:28.623 [2024-07-25 10:08:13.739115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.623 [2024-07-25 10:08:13.739116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:29.555 [2024-07-25 10:08:14.461461] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21a0e10/0x21a5300) succeed. 00:19:29.555 [2024-07-25 10:08:14.470489] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21a2400/0x21e6990) succeed. 
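Editor's note: once the target is listening, the transport is created with a single RPC. Issued directly instead of through the rpc_cmd wrapper seen in the trace, the equivalent call is:

# -t selects the RDMA transport, --num-shared-buffers sizes the shared
# receive buffer pool, and -u is the I/O unit size in bytes.
rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t rdma \
    --num-shared-buffers 1024 -u 8192

On success the target opens both mlx5 ports, which is what the two create_ib_device notices above record.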
00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@35 -- # rpc_cmd 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.555 10:08:14 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:29.555 Malloc1 00:19:29.555 [2024-07-25 10:08:14.677496] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:29.555 Malloc2 00:19:29.821 Malloc3 00:19:29.821 Malloc4 00:19:29.821 Malloc5 00:19:29.821 Malloc6 00:19:29.821 Malloc7 00:19:29.821 Malloc8 00:19:30.084 Malloc9 00:19:30.084 Malloc10 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2591090 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2591090 /var/tmp/bdevperf.sock 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2591090 ']' 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:30.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
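Editor's note: each "# cat" in the create_subsystems loop above appends one block of RPCs to rpcs.txt, and the file is then replayed through a single rpc_cmd, which is why Malloc1 through Malloc10 and the rdma listener notice all appear together. A hedged reconstruction of one block for subsystem $i: the RPC names follow from the bdev and listener notices in the trace, but the malloc size/block-size values and the exact option order are this suite's usual defaults from memory, not read from this log.

bdev_malloc_create 64 512 -b Malloc$i
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420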
00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.084 { 00:19:30.084 "params": { 00:19:30.084 "name": "Nvme$subsystem", 00:19:30.084 "trtype": "$TEST_TRANSPORT", 00:19:30.084 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.084 "adrfam": "ipv4", 00:19:30.084 "trsvcid": "$NVMF_PORT", 00:19:30.084 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.084 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.084 "hdgst": ${hdgst:-false}, 00:19:30.084 "ddgst": ${ddgst:-false} 00:19:30.084 }, 00:19:30.084 "method": "bdev_nvme_attach_controller" 00:19:30.084 } 00:19:30.084 EOF 00:19:30.084 )") 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.084 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.084 { 00:19:30.084 "params": { 00:19:30.084 "name": "Nvme$subsystem", 00:19:30.085 "trtype": "$TEST_TRANSPORT", 00:19:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.085 "adrfam": "ipv4", 00:19:30.085 "trsvcid": "$NVMF_PORT", 00:19:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.085 "hdgst": ${hdgst:-false}, 00:19:30.085 "ddgst": ${ddgst:-false} 00:19:30.085 }, 00:19:30.085 "method": "bdev_nvme_attach_controller" 00:19:30.085 } 00:19:30.085 EOF 00:19:30.085 )") 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.085 { 00:19:30.085 "params": { 00:19:30.085 "name": "Nvme$subsystem", 00:19:30.085 "trtype": "$TEST_TRANSPORT", 00:19:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.085 "adrfam": "ipv4", 00:19:30.085 "trsvcid": "$NVMF_PORT", 00:19:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.085 "hdgst": ${hdgst:-false}, 00:19:30.085 "ddgst": ${ddgst:-false} 00:19:30.085 }, 00:19:30.085 "method": "bdev_nvme_attach_controller" 00:19:30.085 } 00:19:30.085 EOF 00:19:30.085 )") 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.085 { 00:19:30.085 "params": { 00:19:30.085 "name": "Nvme$subsystem", 00:19:30.085 "trtype": "$TEST_TRANSPORT", 00:19:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.085 "adrfam": "ipv4", 00:19:30.085 "trsvcid": "$NVMF_PORT", 00:19:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.085 "hdgst": ${hdgst:-false}, 00:19:30.085 "ddgst": ${ddgst:-false} 00:19:30.085 }, 00:19:30.085 "method": "bdev_nvme_attach_controller" 00:19:30.085 } 00:19:30.085 EOF 00:19:30.085 )") 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.085 { 00:19:30.085 "params": { 00:19:30.085 "name": "Nvme$subsystem", 00:19:30.085 "trtype": "$TEST_TRANSPORT", 00:19:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.085 "adrfam": "ipv4", 00:19:30.085 "trsvcid": "$NVMF_PORT", 00:19:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.085 "hdgst": ${hdgst:-false}, 00:19:30.085 "ddgst": ${ddgst:-false} 00:19:30.085 }, 00:19:30.085 "method": "bdev_nvme_attach_controller" 00:19:30.085 } 00:19:30.085 EOF 00:19:30.085 )") 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.085 { 00:19:30.085 "params": { 00:19:30.085 "name": "Nvme$subsystem", 00:19:30.085 "trtype": "$TEST_TRANSPORT", 00:19:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.085 "adrfam": "ipv4", 00:19:30.085 "trsvcid": "$NVMF_PORT", 00:19:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.085 "hdgst": ${hdgst:-false}, 00:19:30.085 "ddgst": ${ddgst:-false} 00:19:30.085 }, 00:19:30.085 "method": "bdev_nvme_attach_controller" 00:19:30.085 } 00:19:30.085 EOF 00:19:30.085 )") 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.085 { 00:19:30.085 "params": { 00:19:30.085 "name": "Nvme$subsystem", 00:19:30.085 "trtype": "$TEST_TRANSPORT", 00:19:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.085 "adrfam": "ipv4", 00:19:30.085 "trsvcid": "$NVMF_PORT", 00:19:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.085 "hdgst": ${hdgst:-false}, 00:19:30.085 "ddgst": ${ddgst:-false} 00:19:30.085 }, 00:19:30.085 "method": "bdev_nvme_attach_controller" 00:19:30.085 } 00:19:30.085 EOF 00:19:30.085 )") 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:30.085 
[2024-07-25 10:08:15.149919] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:30.085 [2024-07-25 10:08:15.149965] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591090 ] 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.085 { 00:19:30.085 "params": { 00:19:30.085 "name": "Nvme$subsystem", 00:19:30.085 "trtype": "$TEST_TRANSPORT", 00:19:30.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.085 "adrfam": "ipv4", 00:19:30.085 "trsvcid": "$NVMF_PORT", 00:19:30.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.085 "hdgst": ${hdgst:-false}, 00:19:30.085 "ddgst": ${ddgst:-false} 00:19:30.085 }, 00:19:30.085 "method": "bdev_nvme_attach_controller" 00:19:30.085 } 00:19:30.085 EOF 00:19:30.085 )") 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.085 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.086 { 00:19:30.086 "params": { 00:19:30.086 "name": "Nvme$subsystem", 00:19:30.086 "trtype": "$TEST_TRANSPORT", 00:19:30.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.086 "adrfam": "ipv4", 00:19:30.086 "trsvcid": "$NVMF_PORT", 00:19:30.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.086 "hdgst": ${hdgst:-false}, 00:19:30.086 "ddgst": ${ddgst:-false} 00:19:30.086 }, 00:19:30.086 "method": "bdev_nvme_attach_controller" 00:19:30.086 } 00:19:30.086 EOF 00:19:30.086 )") 00:19:30.086 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:30.086 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:30.086 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:30.086 { 00:19:30.086 "params": { 00:19:30.086 "name": "Nvme$subsystem", 00:19:30.086 "trtype": "$TEST_TRANSPORT", 00:19:30.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:30.086 "adrfam": "ipv4", 00:19:30.086 "trsvcid": "$NVMF_PORT", 00:19:30.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:30.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:30.086 "hdgst": ${hdgst:-false}, 00:19:30.086 "ddgst": ${ddgst:-false} 00:19:30.086 }, 00:19:30.086 "method": "bdev_nvme_attach_controller" 00:19:30.086 } 00:19:30.086 EOF 00:19:30.086 )") 00:19:30.086 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:30.086 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
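[Reading aid, not part of the log] The /dev/fd/63 argument in the target/shutdown.sh@102 command above is what bash process substitution expands to at run time; the fully expanded JSON it carries is printed just below at nvmf/common.sh@557-558. A hedged sketch of the launch-and-wait pattern follows; $SPDK_ROOT is an assumed shorthand for the workspace path in the trace, and waitforlisten is the helper named in the trace, not redefined here.

# Hedged reconstruction of target/shutdown.sh@102-104 as traced above.
bdevperf=$SPDK_ROOT/build/examples/bdevperf    # path shortened; see trace for the real one
"$bdevperf" -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!                                     # 2591090 in this run
waitforlisten "$perfpid" /var/tmp/bdevperf.sock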
00:19:30.086 EAL: No free 2048 kB hugepages reported on node 1 00:19:30.086 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:30.086 10:08:15 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:30.086 "params": { 00:19:30.086 "name": "Nvme1", 00:19:30.086 "trtype": "rdma", 00:19:30.086 "traddr": "192.168.100.8", 00:19:30.086 "adrfam": "ipv4", 00:19:30.086 "trsvcid": "4420", 00:19:30.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.086 "hdgst": false, 00:19:30.086 "ddgst": false 00:19:30.086 }, 00:19:30.086 "method": "bdev_nvme_attach_controller" 00:19:30.086 },{ 00:19:30.086 "params": { 00:19:30.086 "name": "Nvme2", 00:19:30.086 "trtype": "rdma", 00:19:30.086 "traddr": "192.168.100.8", 00:19:30.086 "adrfam": "ipv4", 00:19:30.086 "trsvcid": "4420", 00:19:30.086 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:30.086 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:30.086 "hdgst": false, 00:19:30.086 "ddgst": false 00:19:30.086 }, 00:19:30.086 "method": "bdev_nvme_attach_controller" 00:19:30.086 },{ 00:19:30.086 "params": { 00:19:30.086 "name": "Nvme3", 00:19:30.086 "trtype": "rdma", 00:19:30.086 "traddr": "192.168.100.8", 00:19:30.086 "adrfam": "ipv4", 00:19:30.086 "trsvcid": "4420", 00:19:30.086 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:30.086 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:30.086 "hdgst": false, 00:19:30.086 "ddgst": false 00:19:30.086 }, 00:19:30.086 "method": "bdev_nvme_attach_controller" 00:19:30.086 },{ 00:19:30.086 "params": { 00:19:30.086 "name": "Nvme4", 00:19:30.086 "trtype": "rdma", 00:19:30.086 "traddr": "192.168.100.8", 00:19:30.086 "adrfam": "ipv4", 00:19:30.086 "trsvcid": "4420", 00:19:30.086 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:30.086 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:30.086 "hdgst": false, 00:19:30.086 "ddgst": false 00:19:30.086 }, 00:19:30.086 "method": "bdev_nvme_attach_controller" 00:19:30.086 },{ 00:19:30.086 "params": { 00:19:30.086 "name": "Nvme5", 00:19:30.086 "trtype": "rdma", 00:19:30.086 "traddr": "192.168.100.8", 00:19:30.086 "adrfam": "ipv4", 00:19:30.086 "trsvcid": "4420", 00:19:30.086 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:30.086 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:30.086 "hdgst": false, 00:19:30.086 "ddgst": false 00:19:30.086 }, 00:19:30.086 "method": "bdev_nvme_attach_controller" 00:19:30.086 },{ 00:19:30.086 "params": { 00:19:30.086 "name": "Nvme6", 00:19:30.086 "trtype": "rdma", 00:19:30.086 "traddr": "192.168.100.8", 00:19:30.086 "adrfam": "ipv4", 00:19:30.086 "trsvcid": "4420", 00:19:30.086 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:30.086 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:30.086 "hdgst": false, 00:19:30.086 "ddgst": false 00:19:30.086 }, 00:19:30.086 "method": "bdev_nvme_attach_controller" 00:19:30.086 },{ 00:19:30.086 "params": { 00:19:30.086 "name": "Nvme7", 00:19:30.086 "trtype": "rdma", 00:19:30.086 "traddr": "192.168.100.8", 00:19:30.086 "adrfam": "ipv4", 00:19:30.086 "trsvcid": "4420", 00:19:30.086 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:30.086 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:30.086 "hdgst": false, 00:19:30.086 "ddgst": false 00:19:30.086 }, 00:19:30.086 "method": "bdev_nvme_attach_controller" 00:19:30.086 },{ 00:19:30.086 "params": { 00:19:30.086 "name": "Nvme8", 00:19:30.086 "trtype": "rdma", 00:19:30.086 "traddr": "192.168.100.8", 00:19:30.086 "adrfam": "ipv4", 00:19:30.086 "trsvcid": 
"4420", 00:19:30.086 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:30.086 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:30.086 "hdgst": false, 00:19:30.086 "ddgst": false 00:19:30.086 }, 00:19:30.086 "method": "bdev_nvme_attach_controller" 00:19:30.086 },{ 00:19:30.087 "params": { 00:19:30.087 "name": "Nvme9", 00:19:30.087 "trtype": "rdma", 00:19:30.087 "traddr": "192.168.100.8", 00:19:30.087 "adrfam": "ipv4", 00:19:30.087 "trsvcid": "4420", 00:19:30.087 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:30.087 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:30.087 "hdgst": false, 00:19:30.087 "ddgst": false 00:19:30.087 }, 00:19:30.087 "method": "bdev_nvme_attach_controller" 00:19:30.087 },{ 00:19:30.087 "params": { 00:19:30.087 "name": "Nvme10", 00:19:30.087 "trtype": "rdma", 00:19:30.087 "traddr": "192.168.100.8", 00:19:30.087 "adrfam": "ipv4", 00:19:30.087 "trsvcid": "4420", 00:19:30.087 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:30.087 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:30.087 "hdgst": false, 00:19:30.087 "ddgst": false 00:19:30.087 }, 00:19:30.087 "method": "bdev_nvme_attach_controller" 00:19:30.087 }' 00:19:30.087 [2024-07-25 10:08:15.216626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:30.345 [2024-07-25 10:08:15.289268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:31.277 Running I/O for 10 seconds... 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.278 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:31.536 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.536 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:31.536 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:31.536 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=147 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2591090 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2591090 ']' 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2591090 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2591090 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2591090' 
00:19:31.794 killing process with pid 2591090 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2591090 00:19:31.794 10:08:16 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2591090 00:19:32.053 Received shutdown signal, test time was about 0.827179 seconds 00:19:32.053 00:19:32.053 Latency(us) 00:19:32.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.053 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.053 Verification LBA range: start 0x0 length 0x400 00:19:32.053 Nvme1n1 : 0.81 334.92 20.93 0.00 0.00 187444.55 7365.00 217704.35 00:19:32.053 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.053 Verification LBA range: start 0x0 length 0x400 00:19:32.053 Nvme2n1 : 0.81 352.89 22.06 0.00 0.00 174661.16 7552.24 208716.56 00:19:32.053 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.053 Verification LBA range: start 0x0 length 0x400 00:19:32.053 Nvme3n1 : 0.81 392.90 24.56 0.00 0.00 153728.44 6272.73 147799.28 00:19:32.053 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.053 Verification LBA range: start 0x0 length 0x400 00:19:32.053 Nvme4n1 : 0.82 392.34 24.52 0.00 0.00 150835.40 8113.98 139810.13 00:19:32.053 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.053 Verification LBA range: start 0x0 length 0x400 00:19:32.053 Nvme5n1 : 0.82 391.66 24.48 0.00 0.00 148404.32 8675.72 128825.05 00:19:32.053 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.053 Verification LBA range: start 0x0 length 0x400 00:19:32.053 Nvme6n1 : 0.82 391.10 24.44 0.00 0.00 145066.96 9050.21 121335.22 00:19:32.053 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.053 Verification LBA range: start 0x0 length 0x400 00:19:32.053 Nvme7n1 : 0.82 390.47 24.40 0.00 0.00 142392.76 9424.70 112347.43 00:19:32.053 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.053 Verification LBA range: start 0x0 length 0x400 00:19:32.053 Nvme8n1 : 0.82 389.80 24.36 0.00 0.00 139778.54 10048.85 104857.60 00:19:32.053 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.053 Verification LBA range: start 0x0 length 0x400 00:19:32.053 Nvme9n1 : 0.82 388.99 24.31 0.00 0.00 137521.25 10922.67 93373.20 00:19:32.053 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:19:32.053 Verification LBA range: start 0x0 length 0x400 00:19:32.053 Nvme10n1 : 0.83 309.73 19.36 0.00 0.00 168447.09 2933.52 221698.93 00:19:32.053 =================================================================================================================== 00:19:32.053 Total : 3734.81 233.43 0.00 0.00 153816.03 2933.52 221698.93 00:19:32.310 10:08:17 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2590806 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:33.244 10:08:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:33.244 rmmod nvme_rdma 00:19:33.244 rmmod nvme_fabrics 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2590806 ']' 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2590806 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2590806 ']' 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2590806 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2590806 00:19:33.244 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:33.245 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:33.245 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2590806' 00:19:33.245 killing process with pid 2590806 00:19:33.245 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2590806 00:19:33.245 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2590806 00:19:33.813 10:08:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:33.813 00:19:33.813 real 0m5.545s 00:19:33.813 user 0m22.394s 00:19:33.813 sys 0m1.042s 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:33.813 ************************************ 00:19:33.813 END TEST nvmf_shutdown_tc2 00:19:33.813 ************************************ 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:33.813 ************************************ 00:19:33.813 START TEST nvmf_shutdown_tc3 00:19:33.813 ************************************ 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.813 10:08:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:33.813 10:08:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:33.813 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:33.813 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:33.813 Found net devices under 0000:da:00.0: mlx_0_0 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:33.813 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:33.814 Found net devices under 0000:da:00.1: mlx_0_1 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@420 -- # rdma_device_init 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # uname 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:33.814 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:34.073 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:34.073 10:08:18 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:34.073 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:34.073 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:34.073 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:34.074 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:34.074 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:34.074 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:34.074 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:34.074 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:34.074 10:08:18 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:34.074 10:08:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:34.074 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:34.074 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:34.074 altname enp218s0f0np0 00:19:34.074 altname ens818f0np0 00:19:34.074 inet 192.168.100.8/24 scope global mlx_0_0 00:19:34.074 valid_lft forever preferred_lft forever 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:34.074 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:34.074 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:34.074 altname enp218s0f1np1 00:19:34.074 altname ens818f1np1 00:19:34.074 inet 192.168.100.9/24 scope global mlx_0_1 00:19:34.074 valid_lft forever preferred_lft forever 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:34.074 10:08:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # continue 2 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:34.074 10:08:19 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:34.074 192.168.100.9' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:34.074 192.168.100.9' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # head -n 1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:34.074 192.168.100.9' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # tail -n +2 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # head -n 1 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2591887 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2591887 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2591887 ']' 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:34.074 10:08:19 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:34.074 [2024-07-25 10:08:19.211118] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:34.074 [2024-07-25 10:08:19.211174] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.332 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.332 [2024-07-25 10:08:19.281244] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:34.332 [2024-07-25 10:08:19.354433] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.332 [2024-07-25 10:08:19.354488] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.332 [2024-07-25 10:08:19.354495] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.332 [2024-07-25 10:08:19.354501] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.332 [2024-07-25 10:08:19.354505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.332 [2024-07-25 10:08:19.354636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.332 [2024-07-25 10:08:19.354749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:34.332 [2024-07-25 10:08:19.354833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.332 [2024-07-25 10:08:19.354834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:34.897 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:34.897 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:34.897 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:34.897 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:34.897 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.155 [2024-07-25 10:08:20.084747] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device 
mlx5_0(0x1fb7e10/0x1fbc300) succeed. 00:19:35.155 [2024-07-25 10:08:20.093845] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fb9400/0x1ffd990) succeed. 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:35.155 10:08:20 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:35.155 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.156 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.156 Malloc1 00:19:35.156 [2024-07-25 10:08:20.302516] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:35.413 Malloc2 00:19:35.413 Malloc3 00:19:35.413 Malloc4 00:19:35.413 Malloc5 00:19:35.413 Malloc6 00:19:35.413 Malloc7 00:19:35.670 Malloc8 00:19:35.670 Malloc9 00:19:35.670 Malloc10 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2592174 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2592174 /var/tmp/bdevperf.sock 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2592174 ']' 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
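The waitforlisten calls traced here (common/autotest_common.sh@835-840) block the test until the just-launched process is serving RPCs on its UNIX domain socket: first the nvmf target on /var/tmp/spdk.sock, then bdevperf on /var/tmp/bdevperf.sock. A minimal sketch of such a wait loop, assuming only a pid and a socket path; the real helper in autotest_common.sh is more thorough (it also verifies the RPC server answers, not merely that the socket path exists):

waitforlisten_sketch() {
    # Hypothetical simplification of the waitforlisten step traced above.
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1 # process died before listening
        [[ -S $sock ]] && return 0              # socket present: RPC server is up
        sleep 0.1
    done
    return 1 # timed out waiting for the listener
}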
00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.670 { 00:19:35.670 "params": { 00:19:35.670 "name": "Nvme$subsystem", 00:19:35.670 "trtype": "$TEST_TRANSPORT", 00:19:35.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.670 "adrfam": "ipv4", 00:19:35.670 "trsvcid": "$NVMF_PORT", 00:19:35.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.670 "hdgst": ${hdgst:-false}, 00:19:35.670 "ddgst": ${ddgst:-false} 00:19:35.670 }, 00:19:35.670 "method": "bdev_nvme_attach_controller" 00:19:35.670 } 00:19:35.670 EOF 00:19:35.670 )") 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.670 { 00:19:35.670 "params": { 00:19:35.670 "name": "Nvme$subsystem", 00:19:35.670 "trtype": "$TEST_TRANSPORT", 00:19:35.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.670 "adrfam": "ipv4", 00:19:35.670 "trsvcid": "$NVMF_PORT", 00:19:35.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.670 "hdgst": ${hdgst:-false}, 00:19:35.670 "ddgst": ${ddgst:-false} 00:19:35.670 }, 00:19:35.670 "method": "bdev_nvme_attach_controller" 00:19:35.670 } 00:19:35.670 EOF 00:19:35.670 )") 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.670 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.670 { 00:19:35.670 "params": { 00:19:35.670 "name": "Nvme$subsystem", 00:19:35.670 "trtype": "$TEST_TRANSPORT", 00:19:35.670 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.670 "adrfam": "ipv4", 00:19:35.670 "trsvcid": "$NVMF_PORT", 00:19:35.670 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.670 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.670 "hdgst": ${hdgst:-false}, 00:19:35.670 "ddgst": ${ddgst:-false} 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 } 00:19:35.671 EOF 00:19:35.671 )") 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.671 { 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme$subsystem", 00:19:35.671 "trtype": "$TEST_TRANSPORT", 00:19:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "$NVMF_PORT", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.671 "hdgst": ${hdgst:-false}, 00:19:35.671 "ddgst": ${ddgst:-false} 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 } 00:19:35.671 EOF 00:19:35.671 )") 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.671 { 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme$subsystem", 00:19:35.671 "trtype": "$TEST_TRANSPORT", 00:19:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "$NVMF_PORT", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.671 "hdgst": ${hdgst:-false}, 00:19:35.671 "ddgst": ${ddgst:-false} 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 } 00:19:35.671 EOF 00:19:35.671 )") 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.671 { 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme$subsystem", 00:19:35.671 "trtype": "$TEST_TRANSPORT", 00:19:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "$NVMF_PORT", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.671 "hdgst": ${hdgst:-false}, 00:19:35.671 "ddgst": ${ddgst:-false} 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 } 00:19:35.671 EOF 00:19:35.671 )") 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.671 { 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme$subsystem", 00:19:35.671 "trtype": "$TEST_TRANSPORT", 00:19:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "$NVMF_PORT", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.671 "hdgst": ${hdgst:-false}, 00:19:35.671 "ddgst": ${ddgst:-false} 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 } 00:19:35.671 EOF 00:19:35.671 )") 00:19:35.671 [2024-07-25 10:08:20.776419] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
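The run of config+=("$(cat <<-EOF ... EOF)") calls above is gen_nvmf_target_json (invoked at target/shutdown.sh@124 with subsystems 1 through 10) building one bdev_nvme_attach_controller fragment per subsystem. Condensed from the xtrace, with the heredoc expanding $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT from the environment at build time (the enclosing function body is reconstructed from the trace, not quoted from source):

gen_attach_fragments() {
    # Build one JSON fragment per requested subsystem number, as the
    # traced loop in nvmf/common.sh@534-554 does.
    local subsystem
    config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
}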
00:19:35.671 [2024-07-25 10:08:20.776467] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592174 ] 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.671 { 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme$subsystem", 00:19:35.671 "trtype": "$TEST_TRANSPORT", 00:19:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "$NVMF_PORT", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.671 "hdgst": ${hdgst:-false}, 00:19:35.671 "ddgst": ${ddgst:-false} 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 } 00:19:35.671 EOF 00:19:35.671 )") 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.671 { 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme$subsystem", 00:19:35.671 "trtype": "$TEST_TRANSPORT", 00:19:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "$NVMF_PORT", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.671 "hdgst": ${hdgst:-false}, 00:19:35.671 "ddgst": ${ddgst:-false} 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 } 00:19:35.671 EOF 00:19:35.671 )") 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:35.671 { 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme$subsystem", 00:19:35.671 "trtype": "$TEST_TRANSPORT", 00:19:35.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "$NVMF_PORT", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:35.671 "hdgst": ${hdgst:-false}, 00:19:35.671 "ddgst": ${ddgst:-false} 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 } 00:19:35.671 EOF 00:19:35.671 )") 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:35.671 EAL: No free 2048 kB hugepages reported on node 1 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
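The jq . at nvmf/common.sh@556, followed by IFS=, and printf '%s\n' over the fragment array, is what turns those fragments into the single pretty-printed JSON document shown below. The --json /dev/fd/63 in the bdevperf command line is the trace-side appearance of bash process substitution, i.e. roughly: bdevperf ... --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10). A sketch of the join step consistent with that ordering; the wrapper object is illustrative only, since the trace shows just the comma-joined fragments:

# Substitute the comma-joined fragments into a larger document and let
# jq validate and pretty-print it. The "subsystems"/"config" wrapper is
# an assumption; only the IFS=,/printf join and the jq pass appear above.
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        $(IFS=,; printf '%s\n' "${config[*]}")
      ]
    }
  ]
}
JSON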
00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:35.671 10:08:20 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme1", 00:19:35.671 "trtype": "rdma", 00:19:35.671 "traddr": "192.168.100.8", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "4420", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.671 "hdgst": false, 00:19:35.671 "ddgst": false 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 },{ 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme2", 00:19:35.671 "trtype": "rdma", 00:19:35.671 "traddr": "192.168.100.8", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "4420", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:35.671 "hdgst": false, 00:19:35.671 "ddgst": false 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 },{ 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme3", 00:19:35.671 "trtype": "rdma", 00:19:35.671 "traddr": "192.168.100.8", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "4420", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:35.671 "hdgst": false, 00:19:35.671 "ddgst": false 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 },{ 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme4", 00:19:35.671 "trtype": "rdma", 00:19:35.671 "traddr": "192.168.100.8", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "4420", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:35.671 "hdgst": false, 00:19:35.671 "ddgst": false 00:19:35.671 }, 00:19:35.671 "method": "bdev_nvme_attach_controller" 00:19:35.671 },{ 00:19:35.671 "params": { 00:19:35.671 "name": "Nvme5", 00:19:35.671 "trtype": "rdma", 00:19:35.671 "traddr": "192.168.100.8", 00:19:35.671 "adrfam": "ipv4", 00:19:35.671 "trsvcid": "4420", 00:19:35.671 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:35.671 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:35.671 "hdgst": false, 00:19:35.671 "ddgst": false 00:19:35.671 }, 00:19:35.672 "method": "bdev_nvme_attach_controller" 00:19:35.672 },{ 00:19:35.672 "params": { 00:19:35.672 "name": "Nvme6", 00:19:35.672 "trtype": "rdma", 00:19:35.672 "traddr": "192.168.100.8", 00:19:35.672 "adrfam": "ipv4", 00:19:35.672 "trsvcid": "4420", 00:19:35.672 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:35.672 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:35.672 "hdgst": false, 00:19:35.672 "ddgst": false 00:19:35.672 }, 00:19:35.672 "method": "bdev_nvme_attach_controller" 00:19:35.672 },{ 00:19:35.672 "params": { 00:19:35.672 "name": "Nvme7", 00:19:35.672 "trtype": "rdma", 00:19:35.672 "traddr": "192.168.100.8", 00:19:35.672 "adrfam": "ipv4", 00:19:35.672 "trsvcid": "4420", 00:19:35.672 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:35.672 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:35.672 "hdgst": false, 00:19:35.672 "ddgst": false 00:19:35.672 }, 00:19:35.672 "method": "bdev_nvme_attach_controller" 00:19:35.672 },{ 00:19:35.672 "params": { 00:19:35.672 "name": "Nvme8", 00:19:35.672 "trtype": "rdma", 00:19:35.672 "traddr": "192.168.100.8", 00:19:35.672 "adrfam": "ipv4", 00:19:35.672 "trsvcid": "4420", 00:19:35.672 "subnqn": "nqn.2016-06.io.spdk:cnode8", 
00:19:35.672 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:35.672 "hdgst": false, 00:19:35.672 "ddgst": false 00:19:35.672 }, 00:19:35.672 "method": "bdev_nvme_attach_controller" 00:19:35.672 },{ 00:19:35.672 "params": { 00:19:35.672 "name": "Nvme9", 00:19:35.672 "trtype": "rdma", 00:19:35.672 "traddr": "192.168.100.8", 00:19:35.672 "adrfam": "ipv4", 00:19:35.672 "trsvcid": "4420", 00:19:35.672 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:35.672 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:35.672 "hdgst": false, 00:19:35.672 "ddgst": false 00:19:35.672 }, 00:19:35.672 "method": "bdev_nvme_attach_controller" 00:19:35.672 },{ 00:19:35.672 "params": { 00:19:35.672 "name": "Nvme10", 00:19:35.672 "trtype": "rdma", 00:19:35.672 "traddr": "192.168.100.8", 00:19:35.672 "adrfam": "ipv4", 00:19:35.672 "trsvcid": "4420", 00:19:35.672 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:35.672 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:35.672 "hdgst": false, 00:19:35.672 "ddgst": false 00:19:35.672 }, 00:19:35.672 "method": "bdev_nvme_attach_controller" 00:19:35.672 }' 00:19:35.930 [2024-07-25 10:08:20.846435] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.930 [2024-07-25 10:08:20.918551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:36.861 Running I/O for 10 seconds... 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.861 10:08:21 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:37.118 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.118 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:37.118 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:37.118 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=147 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 147 -ge 100 ']' 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:19:37.374 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:19:37.375 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:19:37.375 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2591887 00:19:37.375 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2591887 ']' 00:19:37.375 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2591887 00:19:37.375 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:19:37.375 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:37.375 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2591887 00:19:37.632 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:37.632 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:37.632 10:08:22 
nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2591887' 00:19:37.632 killing process with pid 2591887 00:19:37.632 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2591887 00:19:37.632 10:08:22 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2591887 00:19:37.632 [2024-07-25 10:08:22.562544] rdma.c: 864:nvmf_rdma_qpair_destroy: *WARNING*: Destroying qpair when queue depth is 5 00:19:37.890 10:08:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:19:37.890 10:08:23 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:19:38.456 [2024-07-25 10:08:23.596109] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256900 was disconnected and freed. reset controller. 00:19:38.456 [2024-07-25 10:08:23.598397] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256680 was disconnected and freed. reset controller. 00:19:38.456 [2024-07-25 10:08:23.601075] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256400 was disconnected and freed. reset controller. 00:19:38.456 [2024-07-25 10:08:23.603617] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256180 was disconnected and freed. reset controller. 00:19:38.456 [2024-07-25 10:08:23.606079] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b60ee80 was disconnected and freed. reset controller. 00:19:38.456 [2024-07-25 10:08:23.608512] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806bc0 was disconnected and freed. reset controller. 00:19:38.456 [2024-07-25 10:08:23.611091] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806940 was disconnected and freed. reset controller. 
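Past this point the test has confirmed traffic and pulls the target out from under it: waitforio /var/tmp/bdevperf.sock Nvme1n1 (target/shutdown.sh@132) polls bdev_get_iostat until Nvme1n1 shows at least 100 completed reads (3 on the first sample, 147 on the second), then killprocess terminates the nvmf target pid 2591887 while bdevperf still has IO in flight. The polling loop, reconstructed from the xtrace of shutdown.sh@57-69:

waitforio() {
    # Poll the bdevperf RPC socket until the named bdev has completed at
    # least 100 reads, retrying 10 times with a 0.25 s delay, as traced.
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0 # enough IO observed; safe to shoot the target down mid-run
            break
        fi
        sleep 0.25
    done
    return $ret
}

The "Destroying qpair when queue depth is 5" warning and the qpair "disconnected and freed. reset controller." notices that follow are the expected initiator-side fallout of that deliberate mid-IO shutdown, which is precisely the scenario nvmf_shutdown_tc3 exercises.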
00:19:38.456 [2024-07-25 10:08:23.611230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adf0000 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adbfe80 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad9fd80 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 
10:08:23.611726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad3fa80 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad2fa00 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.611960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.611990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad0f900 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acff880 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001accf700 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acaf600 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac9f580 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac7f480 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac6f400 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac4f300 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612672] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac2f200 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac1f180 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183d00 00:19:38.456 [2024-07-25 10:08:23.612868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.612919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afdff80 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.612970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.612998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afcff00 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 
nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af9fd80 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af4fb00 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af1f980 len:0x10000 key:0x183b00 00:19:38.456 [2024-07-25 10:08:23.613593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.456 [2024-07-25 10:08:23.613622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x183f00 00:19:38.456 [2024-07-25 10:08:23.613643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.721 [2024-07-25 10:08:23.618667] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8066c0 was disconnected and freed. reset controller. 00:19:38.721 [2024-07-25 10:08:23.619321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32768 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1f0000 len:0x10000 key:0x183700 00:19:38.721 [2024-07-25 10:08:23.619361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1dff80 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1cff00 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:33152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1bfe80 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b1afe00 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:33408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b19fd80 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:33536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b18fd00 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:33664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b17fc80 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 
[2024-07-25 10:08:23.619739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:33792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b16fc00 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:33920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b15fb80 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b14fb00 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:34176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b13fa80 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b12fa00 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.619959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.619985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b11f980 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b10f900 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ff880 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0ef800 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620272] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:34944 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0df780 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0cf700 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:35200 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0bf680 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:35328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0af600 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:35456 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09f580 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:35584 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b08f500 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:35712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b07f480 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:35840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b06f400 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f380 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620705] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f300 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b03f280 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f200 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f180 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f100 len:0x10000 key:0x183700 00:19:38.722 [2024-07-25 10:08:23.620917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183400 00:19:38.722 [2024-07-25 10:08:23.620965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.722 [2024-07-25 10:08:23.620991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3dff80 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3cff00 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 
nsid:1 lba:37248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b39fd80 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:37504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b37fc80 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b36fc00 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:37888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b34fb00 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:38144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b33fa80 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20001b31f980 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ff880 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2ef800 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2bf680 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:39296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2af600 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.621971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.621992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 
len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.622040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b27f480 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.622090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:39808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.622147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.622199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.622249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.622298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b22f200 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.622346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.622395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183400 00:19:38.723 [2024-07-25 10:08:23.622446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5f0000 len:0x10000 key:0x183300 
00:19:38.723 [2024-07-25 10:08:23.622496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.723 [2024-07-25 10:08:23.622522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae0f700 len:0x10000 key:0x183b00 00:19:38.724 [2024-07-25 10:08:23.622544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625209] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b806440 was disconnected and freed. reset controller. 00:19:38.724 [2024-07-25 10:08:23.625261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4cfd00 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4bfc80 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25088 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b45f980 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:25728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b42f800 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183300 00:19:38.724 [2024-07-25 10:08:23.625878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:26240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.625926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.625957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:26368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7dff80 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.625979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:26496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:10 nsid:1 lba:26624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:27136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:27264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b76fc00 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:27392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b75fb80 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:27520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b74fb00 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:27648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 
lba:27776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b72fa00 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:28032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:28160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ff880 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:28288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6ef800 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6cf700 len:0x10000 key:0x184300 00:19:38.724 [2024-07-25 10:08:23.626810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.724 [2024-07-25 10:08:23.626837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6bf680 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.626858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.626884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6af600 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.626906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.626933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28928 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.626954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.626980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b68f500 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.627004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:29184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b67f480 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.627057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:29312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b66f400 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.627106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:29440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.627165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:29568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.627214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.627262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b62f200 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.627310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b61f180 len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.627360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 
len:0x10000 key:0x184300 00:19:38.725 [2024-07-25 10:08:23.627415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.627465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9dff80 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.627513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9cff00 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.627562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9bfe80 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.627613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:30720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.627662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.627711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.627759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b97fc80 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.627807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:31232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184500 
00:19:38.725 [2024-07-25 10:08:23.627856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:31360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b95fb80 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.627904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:31488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.627952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.627979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:31616 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.628001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.628028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:31744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.628049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.628076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.628097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.628124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:32000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b90f900 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.628162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.628194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:32128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ff880 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.628216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.628243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:32256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 10:08:23.628265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0 00:19:38.725 [2024-07-25 10:08:23.628292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:32384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8df780 len:0x10000 key:0x184500 00:19:38.725 [2024-07-25 
10:08:23.628313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0
00:19:38.725 [2024-07-25 10:08:23.628340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:32512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x184500
00:19:38.725 [2024-07-25 10:08:23.628361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0
00:19:38.725 [2024-07-25 10:08:23.628388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:32640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4dfd80 len:0x10000 key:0x183300
00:19:38.725 [2024-07-25 10:08:23.628410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:6eefc000 sqhd:52b0 p:0 m:0 dnr:0
00:19:38.725 [2024-07-25 10:08:23.631562] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20001b8061c0 was disconnected and freed. reset controller.
00:19:38.725 [2024-07-25 10:08:23.631714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.725 [2024-07-25 10:08:23.631742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.725 [2024-07-25 10:08:23.631766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.725 [2024-07-25 10:08:23.631787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.725 [2024-07-25 10:08:23.631810] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.725 [2024-07-25 10:08:23.631831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.631854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.631876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.633764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:38.726 [2024-07-25 10:08:23.633800] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
[2024-07-25 10:08:23.633820] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
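The dump above is SPDK's standard reset teardown: once the submission queue is deleted, every WRITE still queued on I/O qpair 1 completes with ABORTED - SQ DELETION (00/08), the qpair is freed, and the outstanding admin ASYNC EVENT REQUESTs (cid 1 through 4) are failed back before the controller is marked failed. A minimal triage sketch, assuming this console output has been saved to build.log (hypothetical filename; plain bash with grep/sort):

  # Count aborted completions on I/O qpair 1, then list the distinct
  # command identifiers of the aborted WRITEs; in this log every WRITE
  # notice pairs with exactly one aborted completion.
  grep -c 'ABORTED - SQ DELETION (00/08) qid:1' build.log
  grep -o 'WRITE sqid:1 cid:[0-9]*' build.log | sort -t: -k3 -n -u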
00:19:38.726 [2024-07-25 10:08:23.633860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.633883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.633913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.633935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.633957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.633978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.634001] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.634021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.636202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:38.726 [2024-07-25 10:08:23.636235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
[2024-07-25 10:08:23.636253] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.726 [2024-07-25 10:08:23.636289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.636312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.636334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.636355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.636377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.636397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.636419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.636440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.638308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:38.726 [2024-07-25 10:08:23.638339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
[2024-07-25 10:08:23.638358] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.726 [2024-07-25 10:08:23.638392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.638413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.638437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.638458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.638480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.638500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.638530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.638551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.640736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:38.726 [2024-07-25 10:08:23.640767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
[2024-07-25 10:08:23.640785] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.726 [2024-07-25 10:08:23.640821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.640844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.640867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.640888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.640910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.640931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.640953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.640973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.643103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:38.726 [2024-07-25 10:08:23.643144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
[2024-07-25 10:08:23.643163] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.726 [2024-07-25 10:08:23.643197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.643218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.643241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.643262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.643284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.643305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.643327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.726 [2024-07-25 10:08:23.643348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.726 [2024-07-25 10:08:23.645396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:38.727 [2024-07-25 10:08:23.645427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
[2024-07-25 10:08:23.645445] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.727 [2024-07-25 10:08:23.645485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.645508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.645531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.645551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.645573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.645594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.645616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.645636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.647664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:38.727 [2024-07-25 10:08:23.647694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
[2024-07-25 10:08:23.647713] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.727 [2024-07-25 10:08:23.647749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.647771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.647794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.647814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.647837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.647857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.647879] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.647900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.649678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:38.727 [2024-07-25 10:08:23.649708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-25 10:08:23.649726] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.727 [2024-07-25 10:08:23.649761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.649783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.649807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.649828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.649857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.649878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.649900] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.649921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.651599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:38.727 [2024-07-25 10:08:23.651629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
[2024-07-25 10:08:23.651647] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
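The same abort sequence repeats verbatim per subsystem above (cnode7, cnode8, cnode6, cnode5, cnode4, cnode3, cnode2, cnode1, cnode10): four aborted ASYNC EVENT REQUESTs on the admin queue, a CQ transport error -6, the failed-state transition, and a failover attempt skipped because a reset already owns the controller. To confirm every subsystem was hit, still assuming the hypothetical build.log:

  # One output line per NVMe-oF subsystem with its failed-state count.
  grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*\] in failed state' build.log | sort | uniq -c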
00:19:38.727 [2024-07-25 10:08:23.651680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.651703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.651725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.651745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.651767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.651787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.651809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:19:38.727 [2024-07-25 10:08:23.651830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:34508 cdw0:0 sqhd:3700 p:1 m:0 dnr:0
00:19:38.727 [2024-07-25 10:08:23.683481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:19:38.727 [2024-07-25 10:08:23.683499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
[2024-07-25 10:08:23.683506] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.727 [2024-07-25 10:08:23.692679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:38.727 [2024-07-25 10:08:23.692704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:19:38.727 [2024-07-25 10:08:23.692712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:19:38.727 [2024-07-25 10:08:23.692746] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.727 [2024-07-25 10:08:23.692756] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.727 [2024-07-25 10:08:23.692766] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.727 [2024-07-25 10:08:23.692775] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.727 [2024-07-25 10:08:23.692785] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.727 [2024-07-25 10:08:23.692796] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:19:38.727 [2024-07-25 10:08:23.692804] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
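Reset requests then fan out across the controllers, and the repeated "Unable to perform failover, already in progress" notices show the reset path winning the race against the failover path for each of them. Both event types can be tallied the same way, under the same build.log assumption:

  # Expect one "resetting controller" notice per cnode, plus a pile of
  # skipped-failover notices matching the dump above.
  grep -o 'nqn\.2016-06\.io\.spdk:cnode[0-9]*\] resetting controller' build.log | sort | uniq -c
  grep -c 'Unable to perform failover, already in progress' build.log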
00:19:38.727 [2024-07-25 10:08:23.692891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:19:38.727 [2024-07-25 10:08:23.692901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:19:38.727 [2024-07-25 10:08:23.692907] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:19:38.727 [2024-07-25 10:08:23.692917] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:19:38.727 [2024-07-25 10:08:23.695110] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:19:38.727 task offset: 40960 on job bdev=Nvme6n1 fails
00:19:38.727
00:19:38.727 Latency(us)
00:19:38.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:38.727 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.727 Job: Nvme1n1 ended in about 1.89 seconds with error
00:19:38.727 Verification LBA range: start 0x0 length 0x400
00:19:38.727 Nvme1n1 : 1.89 135.46 8.47 33.86 0.00 375464.33 34702.87 1086524.46
00:19:38.727 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.727 Job: Nvme2n1 ended in about 1.89 seconds with error
00:19:38.727 Verification LBA range: start 0x0 length 0x400
00:19:38.727 Nvme2n1 : 1.89 135.39 8.46 33.85 0.00 372406.76 36450.50 1086524.46
00:19:38.727 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.727 Job: Nvme3n1 ended in about 1.89 seconds with error
00:19:38.727 Verification LBA range: start 0x0 length 0x400
00:19:38.727 Nvme3n1 : 1.89 135.32 8.46 33.83 0.00 369669.85 40445.07 1086524.46
00:19:38.727 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.727 Job: Nvme4n1 ended in about 1.89 seconds with error
00:19:38.727 Verification LBA range: start 0x0 length 0x400
00:19:38.727 Nvme4n1 : 1.89 151.10 9.44 33.81 0.00 335320.34 5336.50 1086524.46
00:19:38.727 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.727 Job: Nvme5n1 ended in about 1.89 seconds with error
00:19:38.727 Verification LBA range: start 0x0 length 0x400
00:19:38.727 Nvme5n1 : 1.89 143.63 8.98 33.80 0.00 346522.71 8738.13 1086524.46
00:19:38.727 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.727 Job: Nvme6n1 ended in about 1.89 seconds with error
00:19:38.727 Verification LBA range: start 0x0 length 0x400
00:19:38.727 Nvme6n1 : 1.89 142.50 8.91 33.78 0.00 345311.74 13294.45 1078535.31
00:19:38.727 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.727 Job: Nvme7n1 ended in about 1.90 seconds with error
00:19:38.727 Verification LBA range: start 0x0 length 0x400
00:19:38.728 Nvme7n1 : 1.90 151.93 9.50 33.76 0.00 325045.44 18100.42 1078535.31
00:19:38.728 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.728 Job: Nvme8n1 ended in about 1.90 seconds with error
00:19:38.728 Verification LBA range: start 0x0 length 0x400
00:19:38.728 Nvme8n1 : 1.90 144.48 9.03 33.75 0.00 325438.96 25715.08 1078535.31
00:19:38.728 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.728 Job: Nvme9n1 ended in about 1.90 seconds with error
00:19:38.728 Verification LBA range: start 0x0 length 0x400
00:19:38.728 Nvme9n1 : 1.90 134.92 8.43 33.73 0.00 352176.03 59668.97 1142448.52
00:19:38.728 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:38.728 Job: Nvme10n1 ended in about 1.90 seconds with error
00:19:38.728 Verification LBA range: start 0x0 length 0x400
00:19:38.728 Nvme10n1 : 1.90 101.15 6.32 33.72 0.00 436391.74 63913.20 1126470.22
00:19:38.728 ===================================================================================================================
00:19:38.728 Total : 1375.89 85.99 337.89 0.00 355997.52 5336.50 1142448.52
00:19:38.728 [2024-07-25 10:08:23.714862] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:19:38.728 [2024-07-25 10:08:23.714880] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:19:38.728 [2024-07-25 10:08:23.714890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:19:38.728 [2024-07-25 10:08:23.724054] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:38.728 [2024-07-25 10:08:23.724105] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:38.728 [2024-07-25 10:08:23.724124] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:19:38.728 [2024-07-25 10:08:23.724265] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:38.728 [2024-07-25 10:08:23.724290] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:38.728 [2024-07-25 10:08:23.724307] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e5300
00:19:38.728 [2024-07-25 10:08:23.724424] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:38.728 [2024-07-25 10:08:23.724446] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:38.728 [2024-07-25 10:08:23.724461] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d9c80
00:19:38.728 [2024-07-25 10:08:23.728001] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:38.728 [2024-07-25 10:08:23.728040] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:38.728 [2024-07-25 10:08:23.728057] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192d2900
00:19:38.728 [2024-07-25 10:08:23.728161] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:38.728 [2024-07-25 10:08:23.728187] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:38.728 [2024-07-25 10:08:23.728203] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6340
00:19:38.728 [2024-07-25 10:08:23.728300] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:38.728 [2024-07-25 10:08:23.728324] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:38.728 [2024-07-25 10:08:23.728341] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c5040
00:19:38.728 [2024-07-25 10:08:23.728441] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:38.728 [2024-07-25 10:08:23.728464] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:38.728 [2024-07-25 10:08:23.728481] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a8500
00:19:38.728 [2024-07-25 10:08:23.729287] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:38.728 [2024-07-25 10:08:23.729315] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:38.728 [2024-07-25 10:08:23.729330] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929b1c0
00:19:38.728 [2024-07-25 10:08:23.729456] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:38.728 [2024-07-25 10:08:23.729489] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:38.728 [2024-07-25 10:08:23.729505] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e080
00:19:38.728 [2024-07-25 10:08:23.729595] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:19:38.728 [2024-07-25 10:08:23.729616] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:19:38.728 [2024-07-25 10:08:23.729632] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bf1c0
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2592174
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:19:38.988 rmmod nvme_rdma
00:19:38.988 rmmod nvme_fabrics
00:19:38.988 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 121: 2592174 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:19:38.988
00:19:38.988 real 0m5.161s
00:19:38.988 user 0m17.656s
00:19:38.988 sys 0m1.107s
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:38.988 ************************************
00:19:38.988 END TEST nvmf_shutdown_tc3
00:19:38.988 ************************************
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:19:38.988
00:19:38.988 real 0m23.435s
00:19:38.988 user 1m10.603s
00:19:38.988 sys 0m7.670s
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:38.988 10:08:24 nvmf_rdma.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:19:38.988 ************************************
00:19:38.988 END TEST nvmf_shutdown
00:19:38.988 ************************************
00:19:39.247 10:08:24 nvmf_rdma.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT
00:19:39.247
00:19:39.247 real 8m37.041s
00:19:39.247 user 20m23.475s
00:19:39.247 sys 1m44.235s
00:19:39.247 10:08:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:39.247 10:08:24 nvmf_rdma.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:19:39.247 ************************************
00:19:39.247 END TEST nvmf_target_extra
00:19:39.247 ************************************
00:19:39.247 10:08:24 nvmf_rdma -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma
00:19:39.247 10:08:24 nvmf_rdma -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:19:39.247 10:08:24 nvmf_rdma -- common/autotest_common.sh@1107 -- # xtrace_disable
00:19:39.247 10:08:24 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:19:39.247 ************************************
00:19:39.247 START TEST nvmf_host
00:19:39.247 ************************************
00:19:39.247 10:08:24
nvmf_rdma.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=rdma 00:19:39.247 * Looking for test storage... 00:19:39.247 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- paths/export.sh@5 -- # export PATH 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.247 ************************************ 00:19:39.247 START TEST nvmf_multicontroller 00:19:39.247 ************************************ 00:19:39.247 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:19:39.507 * Looking for test 
storage... 00:19:39.507 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.507 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.508 10:08:24 
nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller 
-- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:19:39.508 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # exit 0 00:19:39.508 00:19:39.508 real 0m0.115s 00:19:39.508 user 0m0.059s 00:19:39.508 sys 0m0.064s 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:19:39.508 ************************************ 00:19:39.508 END TEST nvmf_multicontroller 00:19:39.508 ************************************ 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:39.508 ************************************ 00:19:39.508 START TEST nvmf_aer 00:19:39.508 ************************************ 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:19:39.508 * Looking for test storage... 
00:19:39.508 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:39.508 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.767 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:19:39.768 10:08:24 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:45.108 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:45.108 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:45.108 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:45.109 Found net devices under 0000:da:00.0: mlx_0_0 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:45.109 Found net devices under 0000:da:00.1: mlx_0_1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@420 -- # rdma_device_init 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # uname 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:45.109 10:08:30 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:45.109 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:45.109 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:45.109 altname enp218s0f0np0 00:19:45.109 altname ens818f0np0 00:19:45.109 inet 192.168.100.8/24 scope global mlx_0_0 00:19:45.109 valid_lft forever preferred_lft forever 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:45.109 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:45.109 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:45.109 altname enp218s0f1np1 00:19:45.109 altname ens818f1np1 00:19:45.109 inet 192.168.100.9/24 scope global mlx_0_1 00:19:45.109 valid_lft forever preferred_lft forever 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:45.109 10:08:30 
nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # continue 2 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:45.109 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:45.109 192.168.100.9' 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:45.368 192.168.100.9' 
00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # head -n 1 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:45.368 192.168.100.9' 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # tail -n +2 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # head -n 1 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2595901 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2595901 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2595901 ']' 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:45.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:45.368 10:08:30 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:45.368 [2024-07-25 10:08:30.356399] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:45.368 [2024-07-25 10:08:30.356446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:45.368 EAL: No free 2048 kB hugepages reported on node 1 00:19:45.368 [2024-07-25 10:08:30.423959] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:45.368 [2024-07-25 10:08:30.502908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:45.368 [2024-07-25 10:08:30.502944] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:45.368 [2024-07-25 10:08:30.502951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:45.368 [2024-07-25 10:08:30.502956] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:45.368 [2024-07-25 10:08:30.502962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:45.368 [2024-07-25 10:08:30.503022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:45.368 [2024-07-25 10:08:30.503057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.368 [2024-07-25 10:08:30.503168] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:45.368 [2024-07-25 10:08:30.503169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:46.303 [2024-07-25 10:08:31.225314] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe6ecc0/0xe731b0) succeed. 00:19:46.303 [2024-07-25 10:08:31.235031] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe70300/0xeb4840) succeed. 
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:19:46.303 Malloc0
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:19:46.303 [2024-07-25 10:08:31.399366] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:19:46.303 [
00:19:46.303   {
00:19:46.303     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:19:46.303     "subtype": "Discovery",
00:19:46.303     "listen_addresses": [],
00:19:46.303     "allow_any_host": true,
00:19:46.303     "hosts": []
00:19:46.303   },
00:19:46.303   {
00:19:46.303     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:19:46.303     "subtype": "NVMe",
00:19:46.303     "listen_addresses": [
00:19:46.303       {
00:19:46.303         "trtype": "RDMA",
00:19:46.303         "adrfam": "IPv4",
00:19:46.303         "traddr": "192.168.100.8",
00:19:46.303         "trsvcid": "4420"
00:19:46.303       }
00:19:46.303     ],
00:19:46.303     "allow_any_host": true,
00:19:46.303     "hosts": [],
00:19:46.303     "serial_number": "SPDK00000000000001",
00:19:46.303     "model_number": "SPDK bdev Controller",
00:19:46.303     "max_namespaces": 2,
00:19:46.303     "min_cntlid": 1,
00:19:46.303     "max_cntlid": 65519,
00:19:46.303     "namespaces": [
00:19:46.303       {
00:19:46.303         "nsid": 1,
00:19:46.303         "bdev_name": "Malloc0",
00:19:46.303         "name": "Malloc0",
00:19:46.303         "nguid": "6D5827F06DDC4806A8375B37EFE3CEAA",
00:19:46.303         "uuid": "6d5827f0-6ddc-4806-a837-5b37efe3ceaa"
00:19:46.303       }
00:19:46.303     ]
00:19:46.303   }
00:19:46.303 ]
00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer --
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2596104 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:19:46.303 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:46.561 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:46.561 Malloc1 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.561 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:46.561 [ 00:19:46.561 { 00:19:46.561 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:46.561 "subtype": "Discovery", 00:19:46.561 "listen_addresses": [], 00:19:46.561 "allow_any_host": true, 00:19:46.561 "hosts": [] 00:19:46.561 }, 00:19:46.561 { 00:19:46.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.561 "subtype": "NVMe", 00:19:46.561 "listen_addresses": [ 00:19:46.561 { 00:19:46.562 "trtype": "RDMA", 00:19:46.562 "adrfam": "IPv4", 00:19:46.562 "traddr": "192.168.100.8", 00:19:46.562 "trsvcid": "4420" 00:19:46.562 } 00:19:46.562 ], 00:19:46.562 "allow_any_host": true, 00:19:46.562 "hosts": [], 00:19:46.562 "serial_number": "SPDK00000000000001", 00:19:46.562 "model_number": "SPDK bdev Controller", 00:19:46.562 "max_namespaces": 2, 00:19:46.562 "min_cntlid": 1, 00:19:46.562 "max_cntlid": 65519, 00:19:46.562 "namespaces": [ 00:19:46.562 { 00:19:46.562 "nsid": 1, 00:19:46.562 "bdev_name": "Malloc0", 00:19:46.562 "name": "Malloc0", 00:19:46.562 "nguid": "6D5827F06DDC4806A8375B37EFE3CEAA", 00:19:46.562 "uuid": "6d5827f0-6ddc-4806-a837-5b37efe3ceaa" 00:19:46.562 }, 00:19:46.562 { 00:19:46.562 "nsid": 2, 00:19:46.562 "bdev_name": "Malloc1", 00:19:46.562 "name": "Malloc1", 00:19:46.562 "nguid": "DA158834042945D2944B2957B3F38B19", 00:19:46.562 "uuid": "da158834-0429-45d2-944b-2957b3f38b19" 00:19:46.562 } 00:19:46.562 ] 00:19:46.562 } 00:19:46.562 ] 00:19:46.562 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.562 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2596104 00:19:46.820 Asynchronous Event Request test 00:19:46.820 Attaching to 192.168.100.8 00:19:46.820 Attached to 192.168.100.8 00:19:46.820 Registering asynchronous event callbacks... 00:19:46.820 Starting namespace attribute notice tests for all controllers... 00:19:46.820 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:46.820 aer_cb - Changed Namespace 00:19:46.820 Cleaning up... 
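Annotation: the aer binary's output above confirms the expected flow: hot-adding Malloc1 as nsid 2 changes the subsystem's namespace list, the target raises a Namespace Attribute Changed notice (log page 0x04, Changed Namespace List; aen_event_type 0x02 is Notice), and the host's aer_cb fires. A hedged way to check the resulting state by hand, assuming jq is available (the harness itself does not use it):

    # Expect 2 namespaces on cnode1 after the hot add (Malloc0 = nsid 1, Malloc1 = nsid 2).
    scripts/rpc.py nvmf_get_subsystems \
      | jq '.[] | select(.nqn == "nqn.2016-06.io.spdk:cnode1") | .namespaces | length'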
00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:46.820 rmmod nvme_rdma 00:19:46.820 rmmod nvme_fabrics 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2595901 ']' 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2595901 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2595901 ']' 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2595901 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2595901 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2595901' 00:19:46.820 killing process 
with pid 2595901 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2595901 00:19:46.820 10:08:31 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2595901 00:19:47.078 10:08:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:47.078 10:08:32 nvmf_rdma.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:47.078 00:19:47.078 real 0m7.600s 00:19:47.078 user 0m8.183s 00:19:47.078 sys 0m4.661s 00:19:47.078 10:08:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:47.078 10:08:32 nvmf_rdma.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:19:47.078 ************************************ 00:19:47.078 END TEST nvmf_aer 00:19:47.078 ************************************ 00:19:47.078 10:08:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:19:47.078 10:08:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:47.078 10:08:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:47.078 10:08:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:47.078 ************************************ 00:19:47.078 START TEST nvmf_async_init 00:19:47.078 ************************************ 00:19:47.078 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:19:47.337 * Looking for test storage... 00:19:47.337 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
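Annotation: the async_init header traced above derives the host identity from nvme-cli: nvme gen-hostnqn prints a UUID-based NQN, and NVME_HOSTID is the UUID portion of it. A sketch of that derivation (the exact parameter expansion used by nvmf/common.sh is an assumption here):

    NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00ad29c2-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # strip everything up to "uuid:" (assumed form)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")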
00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.337 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ccfbc5c58bf042c8b9386d2518cc8b3c 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:47.338 10:08:32 
nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:19:47.338 10:08:32 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 
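Annotation: the trace above classifies NICs by PCI vendor:device ID (Intel 0x8086 for the E810/X722 lists, Mellanox 0x15b3 for the mlx5 family); on this node only the two Mellanox 0x1015 functions survive the filter. A hedged way to reproduce the enumeration outside the harness, assuming lspci from pciutils is installed:

    # List Mellanox (vendor 0x15b3) functions with numeric IDs;
    # expect 0000:da:00.0 and 0000:da:00.1 with device ID 0x1015 here.
    lspci -nn -d 15b3: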
00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:19:53.906 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:19:53.906 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:19:53.906 Found net devices under 0000:da:00.0: mlx_0_0 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:19:53.906 Found net devices under 0000:da:00.1: mlx_0_1 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@420 -- # rdma_device_init 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # uname 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@62 -- # modprobe ib_cm 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@63 -- # modprobe ib_core 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@64 -- # modprobe ib_umad 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@66 -- # modprobe iw_cm 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # allocate_nic_ips 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # get_rdma_if_list 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.906 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 
== \m\l\x\_\0\_\0 ]] 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:19:53.907 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:53.907 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:19:53.907 altname enp218s0f0np0 00:19:53.907 altname ens818f0np0 00:19:53.907 inet 192.168.100.8/24 scope global mlx_0_0 00:19:53.907 valid_lft forever preferred_lft forever 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:19:53.907 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:53.907 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:19:53.907 altname enp218s0f1np1 00:19:53.907 altname ens818f1np1 00:19:53.907 inet 192.168.100.9/24 scope global mlx_0_1 00:19:53.907 valid_lft forever preferred_lft 
forever 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # get_rdma_if_list 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_0 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # echo mlx_0_1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # continue 2 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@112 -- # interface=mlx_0_1 
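Annotation: the per-interface address lookup traced above and below is a three-stage pipeline over ip -o -4 addr show. A paraphrase of the helper exactly as it appears in the trace (nvmf/common.sh@112-113):

    get_ip_address() {
        local interface=$1
        # One record per line; field 4 is "addr/prefix"; keep only the address.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # -> 192.168.100.8
    get_ip_address mlx_0_1   # -> 192.168.100.9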
00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # awk '{print $4}' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@113 -- # cut -d/ -f1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:19:53.907 192.168.100.9' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:19:53.907 192.168.100.9' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # head -n 1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:19:53.907 192.168.100.9' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # tail -n +2 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # head -n 1 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:19:53.907 10:08:37 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2599382 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2599382 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2599382 ']' 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
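Annotation: nvmfappstart launches the target with a single-core mask for this test (only one reactor starts, on core 0, versus four for the aer run, as the startup notices that follow show) and blocks until the RPC socket answers. A minimal sketch of what it does, per the trace above:

    # Start nvmf_tgt with SHM id 0, all tracepoint groups enabled, cores limited to 0x1.
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest helper: polls /var/tmp/spdk.sock until the app is up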
00:19:53.907 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 [2024-07-25 10:08:38.062095] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:19:53.908 [2024-07-25 10:08:38.062151] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.908 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.908 [2024-07-25 10:08:38.128844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.908 [2024-07-25 10:08:38.207050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.908 [2024-07-25 10:08:38.207084] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.908 [2024-07-25 10:08:38.207091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.908 [2024-07-25 10:08:38.207097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.908 [2024-07-25 10:08:38.207105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.908 [2024-07-25 10:08:38.207133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 [2024-07-25 10:08:38.925931] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdff910/0xe03e00) succeed. 00:19:53.908 [2024-07-25 10:08:38.935667] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe00e10/0xe45490) succeed. 
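Annotation: the RPC sequence traced next provisions the test subsystem: a 1024-block, 512-byte null bdev, an open (allow-any-host) subsystem cnode0, the namespace attached with the pre-generated NGUID, a listener on port 4420, and a host-side bdev_nvme attach that yields nvme0n1. Collected as one sketch, assuming paths relative to the SPDK tree and the $nguid set earlier by the harness:

    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g "$nguid"
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 \
        -s 4420 -n nqn.2016-06.io.spdk:cnode0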
00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.908 10:08:38 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 null0 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ccfbc5c58bf042c8b9386d2518cc8b3c 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:53.908 [2024-07-25 10:08:39.035223] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.908 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.166 nvme0n1 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.166 [ 
00:19:54.166 { 00:19:54.166 "name": "nvme0n1", 00:19:54.166 "aliases": [ 00:19:54.166 "ccfbc5c5-8bf0-42c8-b938-6d2518cc8b3c" 00:19:54.166 ], 00:19:54.166 "product_name": "NVMe disk", 00:19:54.166 "block_size": 512, 00:19:54.166 "num_blocks": 2097152, 00:19:54.166 "uuid": "ccfbc5c5-8bf0-42c8-b938-6d2518cc8b3c", 00:19:54.166 "assigned_rate_limits": { 00:19:54.166 "rw_ios_per_sec": 0, 00:19:54.166 "rw_mbytes_per_sec": 0, 00:19:54.166 "r_mbytes_per_sec": 0, 00:19:54.166 "w_mbytes_per_sec": 0 00:19:54.166 }, 00:19:54.166 "claimed": false, 00:19:54.166 "zoned": false, 00:19:54.166 "supported_io_types": { 00:19:54.166 "read": true, 00:19:54.166 "write": true, 00:19:54.166 "unmap": false, 00:19:54.166 "flush": true, 00:19:54.166 "reset": true, 00:19:54.166 "nvme_admin": true, 00:19:54.166 "nvme_io": true, 00:19:54.166 "nvme_io_md": false, 00:19:54.166 "write_zeroes": true, 00:19:54.166 "zcopy": false, 00:19:54.166 "get_zone_info": false, 00:19:54.166 "zone_management": false, 00:19:54.166 "zone_append": false, 00:19:54.166 "compare": true, 00:19:54.166 "compare_and_write": true, 00:19:54.166 "abort": true, 00:19:54.166 "seek_hole": false, 00:19:54.166 "seek_data": false, 00:19:54.166 "copy": true, 00:19:54.166 "nvme_iov_md": false 00:19:54.166 }, 00:19:54.166 "memory_domains": [ 00:19:54.166 { 00:19:54.166 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:54.166 "dma_device_type": 0 00:19:54.166 } 00:19:54.166 ], 00:19:54.166 "driver_specific": { 00:19:54.166 "nvme": [ 00:19:54.166 { 00:19:54.166 "trid": { 00:19:54.166 "trtype": "RDMA", 00:19:54.166 "adrfam": "IPv4", 00:19:54.166 "traddr": "192.168.100.8", 00:19:54.166 "trsvcid": "4420", 00:19:54.166 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:54.166 }, 00:19:54.166 "ctrlr_data": { 00:19:54.166 "cntlid": 1, 00:19:54.166 "vendor_id": "0x8086", 00:19:54.166 "model_number": "SPDK bdev Controller", 00:19:54.166 "serial_number": "00000000000000000000", 00:19:54.166 "firmware_revision": "24.09", 00:19:54.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:54.166 "oacs": { 00:19:54.166 "security": 0, 00:19:54.166 "format": 0, 00:19:54.166 "firmware": 0, 00:19:54.166 "ns_manage": 0 00:19:54.166 }, 00:19:54.166 "multi_ctrlr": true, 00:19:54.166 "ana_reporting": false 00:19:54.166 }, 00:19:54.166 "vs": { 00:19:54.166 "nvme_version": "1.3" 00:19:54.166 }, 00:19:54.166 "ns_data": { 00:19:54.166 "id": 1, 00:19:54.166 "can_share": true 00:19:54.166 } 00:19:54.166 } 00:19:54.166 ], 00:19:54.166 "mp_policy": "active_passive" 00:19:54.166 } 00:19:54.166 } 00:19:54.166 ] 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.166 [2024-07-25 10:08:39.148787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:19:54.166 [2024-07-25 10:08:39.174784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:19:54.166 [2024-07-25 10:08:39.196584] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
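Annotation: the reset traced above tears down the admin qpair (the "CQ transport error -6" notice is the expected disconnect) and reconnects, so the target hands out a fresh controller ID: the bdev dump that follows reports cntlid 2 where the first dump reported cntlid 1. A hedged way to watch that field, assuming jq:

    scripts/rpc.py bdev_nvme_reset_controller nvme0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset, 2 after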
00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.166 [ 00:19:54.166 { 00:19:54.166 "name": "nvme0n1", 00:19:54.166 "aliases": [ 00:19:54.166 "ccfbc5c5-8bf0-42c8-b938-6d2518cc8b3c" 00:19:54.166 ], 00:19:54.166 "product_name": "NVMe disk", 00:19:54.166 "block_size": 512, 00:19:54.166 "num_blocks": 2097152, 00:19:54.166 "uuid": "ccfbc5c5-8bf0-42c8-b938-6d2518cc8b3c", 00:19:54.166 "assigned_rate_limits": { 00:19:54.166 "rw_ios_per_sec": 0, 00:19:54.166 "rw_mbytes_per_sec": 0, 00:19:54.166 "r_mbytes_per_sec": 0, 00:19:54.166 "w_mbytes_per_sec": 0 00:19:54.166 }, 00:19:54.166 "claimed": false, 00:19:54.166 "zoned": false, 00:19:54.166 "supported_io_types": { 00:19:54.166 "read": true, 00:19:54.166 "write": true, 00:19:54.166 "unmap": false, 00:19:54.166 "flush": true, 00:19:54.166 "reset": true, 00:19:54.166 "nvme_admin": true, 00:19:54.166 "nvme_io": true, 00:19:54.166 "nvme_io_md": false, 00:19:54.166 "write_zeroes": true, 00:19:54.166 "zcopy": false, 00:19:54.166 "get_zone_info": false, 00:19:54.166 "zone_management": false, 00:19:54.166 "zone_append": false, 00:19:54.166 "compare": true, 00:19:54.166 "compare_and_write": true, 00:19:54.166 "abort": true, 00:19:54.166 "seek_hole": false, 00:19:54.166 "seek_data": false, 00:19:54.166 "copy": true, 00:19:54.166 "nvme_iov_md": false 00:19:54.166 }, 00:19:54.166 "memory_domains": [ 00:19:54.166 { 00:19:54.166 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:54.166 "dma_device_type": 0 00:19:54.166 } 00:19:54.166 ], 00:19:54.166 "driver_specific": { 00:19:54.166 "nvme": [ 00:19:54.166 { 00:19:54.166 "trid": { 00:19:54.166 "trtype": "RDMA", 00:19:54.166 "adrfam": "IPv4", 00:19:54.166 "traddr": "192.168.100.8", 00:19:54.166 "trsvcid": "4420", 00:19:54.166 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:54.166 }, 00:19:54.166 "ctrlr_data": { 00:19:54.166 "cntlid": 2, 00:19:54.166 "vendor_id": "0x8086", 00:19:54.166 "model_number": "SPDK bdev Controller", 00:19:54.166 "serial_number": "00000000000000000000", 00:19:54.166 "firmware_revision": "24.09", 00:19:54.166 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:54.166 "oacs": { 00:19:54.166 "security": 0, 00:19:54.166 "format": 0, 00:19:54.166 "firmware": 0, 00:19:54.166 "ns_manage": 0 00:19:54.166 }, 00:19:54.166 "multi_ctrlr": true, 00:19:54.166 "ana_reporting": false 00:19:54.166 }, 00:19:54.166 "vs": { 00:19:54.166 "nvme_version": "1.3" 00:19:54.166 }, 00:19:54.166 "ns_data": { 00:19:54.166 "id": 1, 00:19:54.166 "can_share": true 00:19:54.166 } 00:19:54.166 } 00:19:54.166 ], 00:19:54.166 "mp_policy": "active_passive" 00:19:54.166 } 00:19:54.166 } 00:19:54.166 ] 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.8dhgNDT53g 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.8dhgNDT53g 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.166 [2024-07-25 10:08:39.268137] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8dhgNDT53g 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8dhgNDT53g 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.166 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.166 [2024-07-25 10:08:39.288191] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:54.424 nvme0n1 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.424 [ 00:19:54.424 { 00:19:54.424 "name": "nvme0n1", 00:19:54.424 "aliases": [ 00:19:54.424 "ccfbc5c5-8bf0-42c8-b938-6d2518cc8b3c" 00:19:54.424 ], 00:19:54.424 "product_name": "NVMe disk", 00:19:54.424 "block_size": 512, 00:19:54.424 "num_blocks": 2097152, 00:19:54.424 "uuid": 
"ccfbc5c5-8bf0-42c8-b938-6d2518cc8b3c", 00:19:54.424 "assigned_rate_limits": { 00:19:54.424 "rw_ios_per_sec": 0, 00:19:54.424 "rw_mbytes_per_sec": 0, 00:19:54.424 "r_mbytes_per_sec": 0, 00:19:54.424 "w_mbytes_per_sec": 0 00:19:54.424 }, 00:19:54.424 "claimed": false, 00:19:54.424 "zoned": false, 00:19:54.424 "supported_io_types": { 00:19:54.424 "read": true, 00:19:54.424 "write": true, 00:19:54.424 "unmap": false, 00:19:54.424 "flush": true, 00:19:54.424 "reset": true, 00:19:54.424 "nvme_admin": true, 00:19:54.424 "nvme_io": true, 00:19:54.424 "nvme_io_md": false, 00:19:54.424 "write_zeroes": true, 00:19:54.424 "zcopy": false, 00:19:54.424 "get_zone_info": false, 00:19:54.424 "zone_management": false, 00:19:54.424 "zone_append": false, 00:19:54.424 "compare": true, 00:19:54.424 "compare_and_write": true, 00:19:54.424 "abort": true, 00:19:54.424 "seek_hole": false, 00:19:54.424 "seek_data": false, 00:19:54.424 "copy": true, 00:19:54.424 "nvme_iov_md": false 00:19:54.424 }, 00:19:54.424 "memory_domains": [ 00:19:54.424 { 00:19:54.424 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:19:54.424 "dma_device_type": 0 00:19:54.424 } 00:19:54.424 ], 00:19:54.424 "driver_specific": { 00:19:54.424 "nvme": [ 00:19:54.424 { 00:19:54.424 "trid": { 00:19:54.424 "trtype": "RDMA", 00:19:54.424 "adrfam": "IPv4", 00:19:54.424 "traddr": "192.168.100.8", 00:19:54.424 "trsvcid": "4421", 00:19:54.424 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:19:54.424 }, 00:19:54.424 "ctrlr_data": { 00:19:54.424 "cntlid": 3, 00:19:54.424 "vendor_id": "0x8086", 00:19:54.424 "model_number": "SPDK bdev Controller", 00:19:54.424 "serial_number": "00000000000000000000", 00:19:54.424 "firmware_revision": "24.09", 00:19:54.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:54.424 "oacs": { 00:19:54.424 "security": 0, 00:19:54.424 "format": 0, 00:19:54.424 "firmware": 0, 00:19:54.424 "ns_manage": 0 00:19:54.424 }, 00:19:54.424 "multi_ctrlr": true, 00:19:54.424 "ana_reporting": false 00:19:54.424 }, 00:19:54.424 "vs": { 00:19:54.424 "nvme_version": "1.3" 00:19:54.424 }, 00:19:54.424 "ns_data": { 00:19:54.424 "id": 1, 00:19:54.424 "can_share": true 00:19:54.424 } 00:19:54.424 } 00:19:54.424 ], 00:19:54.424 "mp_policy": "active_passive" 00:19:54.424 } 00:19:54.424 } 00:19:54.424 ] 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.8dhgNDT53g 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 
00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:19:54.424 rmmod nvme_rdma 00:19:54.424 rmmod nvme_fabrics 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2599382 ']' 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2599382 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2599382 ']' 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2599382 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2599382 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2599382' 00:19:54.424 killing process with pid 2599382 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2599382 00:19:54.424 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2599382 00:19:54.683 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:54.683 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:19:54.683 00:19:54.683 real 0m7.480s 00:19:54.683 user 0m3.440s 00:19:54.683 sys 0m4.647s 00:19:54.683 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:54.683 10:08:39 nvmf_rdma.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:19:54.683 ************************************ 00:19:54.683 END TEST nvmf_async_init 00:19:54.683 ************************************ 00:19:54.683 10:08:39 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:19:54.683 10:08:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:54.683 10:08:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:54.683 10:08:39 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:54.683 ************************************ 00:19:54.683 START TEST dma 00:19:54.683 ************************************ 00:19:54.683 10:08:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:19:54.941 * Looking for test storage... 
00:19:54.941 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@18 -- # subsystem=0 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- host/dma.sh@93 -- # nvmftestinit 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@285 -- # xtrace_disable 00:19:54.941 10:08:39 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # pci_devs=() 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # net_devs=() 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # e810=() 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@296 -- # local -ga e810 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # x722=() 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@297 -- # local -ga x722 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # mlx=() 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@298 -- # local -ga mlx 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 
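
Note: the arrays populated here map NIC families to PCI vendor:device IDs (0x8086 for the Intel e810/x722 parts, 0x15b3 for the Mellanox mlx parts); the harness then walks pci_devs and resolves each matched function to its netdev through sysfs. An illustrative stand-alone query for the parts matched on this host, hedged in that the harness builds its own pci_bus_cache rather than shelling out to lspci:

  # List PCI functions with Mellanox vendor 0x15b3, device 0x1015, the IDs
  # reported as 'Found 0000:da:00.x (0x15b3 - 0x1015)' in the trace below:
  lspci -Dn | grep '15b3:1015'
  # Resolve a matched function to its netdev name, mirroring the
  # pci_net_devs=(/sys/bus/pci/devices/$pci/net/*) expansion the harness uses:
  ls /sys/bus/pci/devices/0000:da:00.0/net     # -> mlx_0_0
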
00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:00.210 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:00.210 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:00.210 Found net devices under 0000:da:00.0: mlx_0_0 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@383 -- 
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:00.210 Found net devices under 0000:da:00.1: mlx_0_1 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@414 -- # is_hw=yes 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@420 -- # rdma_device_init 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # uname 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:00.210 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:00.469 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for 
rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:00.470 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:00.470 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:20:00.470 altname enp218s0f0np0 00:20:00.470 altname ens818f0np0 00:20:00.470 inet 192.168.100.8/24 scope global mlx_0_0 00:20:00.470 valid_lft forever preferred_lft forever 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:00.470 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:00.470 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:20:00.470 altname enp218s0f1np1 00:20:00.470 altname ens818f1np1 00:20:00.470 inet 192.168.100.9/24 scope global mlx_0_1 00:20:00.470 valid_lft forever preferred_lft forever 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@422 -- # return 0 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:00.470 10:08:45 
nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@105 -- # continue 2 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:00.470 192.168.100.9' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:00.470 192.168.100.9' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # head -n 1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:00.470 192.168.100.9' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # tail -n +2 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # head -n 1 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@458 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@481 -- # nvmfpid=2602695 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@482 -- # waitforlisten 2602695 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@831 -- # '[' -z 2602695 ']' 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:00.470 10:08:45 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:00.470 [2024-07-25 10:08:45.596229] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:00.471 [2024-07-25 10:08:45.596283] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.471 EAL: No free 2048 kB hugepages reported on node 1 00:20:00.729 [2024-07-25 10:08:45.662825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:00.729 [2024-07-25 10:08:45.744777] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.729 [2024-07-25 10:08:45.744811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.729 [2024-07-25 10:08:45.744818] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.729 [2024-07-25 10:08:45.744824] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.729 [2024-07-25 10:08:45.744829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
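
Note: the address discovery traced above reduces to one pipeline per RDMA interface, condensed here from the get_ip_address() calls in nvmf/common.sh:

  # First and second target IPs, exactly as derived in the trace:
  ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.8
  ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # -> 192.168.100.9
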
00:20:00.729 [2024-07-25 10:08:45.744874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.729 [2024-07-25 10:08:45.744876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.296 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:01.296 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@864 -- # return 0 00:20:01.296 10:08:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:01.296 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:01.296 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:01.296 10:08:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.296 10:08:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:20:01.296 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.296 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:01.555 [2024-07-25 10:08:46.459875] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14cf3c0/0x14d38b0) succeed. 00:20:01.555 [2024-07-25 10:08:46.468714] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14d0870/0x1514f40) succeed. 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:01.555 Malloc0 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:01.555 [2024-07-25 10:08:46.636150] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma 
-q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # config=() 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@532 -- # local subsystem config 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:01.555 10:08:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:01.555 { 00:20:01.555 "params": { 00:20:01.555 "name": "Nvme$subsystem", 00:20:01.555 "trtype": "$TEST_TRANSPORT", 00:20:01.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.555 "adrfam": "ipv4", 00:20:01.555 "trsvcid": "$NVMF_PORT", 00:20:01.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.556 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.556 "hdgst": ${hdgst:-false}, 00:20:01.556 "ddgst": ${ddgst:-false} 00:20:01.556 }, 00:20:01.556 "method": "bdev_nvme_attach_controller" 00:20:01.556 } 00:20:01.556 EOF 00:20:01.556 )") 00:20:01.556 10:08:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@554 -- # cat 00:20:01.556 10:08:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@556 -- # jq . 00:20:01.556 10:08:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@557 -- # IFS=, 00:20:01.556 10:08:46 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:01.556 "params": { 00:20:01.556 "name": "Nvme0", 00:20:01.556 "trtype": "rdma", 00:20:01.556 "traddr": "192.168.100.8", 00:20:01.556 "adrfam": "ipv4", 00:20:01.556 "trsvcid": "4420", 00:20:01.556 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.556 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:01.556 "hdgst": false, 00:20:01.556 "ddgst": false 00:20:01.556 }, 00:20:01.556 "method": "bdev_nvme_attach_controller" 00:20:01.556 }' 00:20:01.556 [2024-07-25 10:08:46.681577] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
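
Note: gen_nvmf_target_json renders one bdev_nvme_attach_controller entry per subsystem from the heredoc template and hands it to test_dma on file descriptor 62 (--json /dev/fd/62). The rendered entry printed above, reflowed for readability:

  {
    "params": {
      "name": "Nvme0",
      "trtype": "rdma",
      "traddr": "192.168.100.8",
      "adrfam": "ipv4",
      "trsvcid": "4420",
      "subnqn": "nqn.2016-06.io.spdk:cnode0",
      "hostnqn": "nqn.2016-06.io.spdk:host0",
      "hdgst": false,
      "ddgst": false
    },
    "method": "bdev_nvme_attach_controller"
  }
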
00:20:01.556 [2024-07-25 10:08:46.681627] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602944 ] 00:20:01.556 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.814 [2024-07-25 10:08:46.745760] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:01.814 [2024-07-25 10:08:46.819654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:01.814 [2024-07-25 10:08:46.819655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.080 bdev Nvme0n1 reports 1 memory domains 00:20:07.080 bdev Nvme0n1 supports RDMA memory domain 00:20:07.080 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:07.080 ========================================================================== 00:20:07.080 Latency [us] 00:20:07.080 IOPS MiB/s Average min max 00:20:07.080 Core 2: 21588.29 84.33 740.39 253.99 8616.06 00:20:07.080 Core 3: 21594.49 84.35 740.18 260.83 8654.39 00:20:07.080 ========================================================================== 00:20:07.080 Total : 43182.78 168.68 740.29 253.99 8654.39 00:20:07.080 00:20:07.080 Total operations: 215971, translate 215971 pull_push 0 memzero 0 00:20:07.080 10:08:52 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push 00:20:07.080 10:08:52 nvmf_rdma.nvmf_host.dma -- host/dma.sh@107 -- # gen_malloc_json 00:20:07.080 10:08:52 nvmf_rdma.nvmf_host.dma -- host/dma.sh@21 -- # jq . 00:20:07.339 [2024-07-25 10:08:52.255613] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
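
Note: a quick consistency check on the translate run above, assuming the counters cover the full 5 s measurement window:

  21588.29 + 21594.49 = 43182.78 IOPS, matching the reported Total
  43182.78 IOPS x 5 s = 215914, in line with the 215971 translate operations
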
00:20:07.339 [2024-07-25 10:08:52.255665] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2603858 ] 00:20:07.339 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.339 [2024-07-25 10:08:52.324397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:07.339 [2024-07-25 10:08:52.394370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.339 [2024-07-25 10:08:52.394372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:12.604 bdev Malloc0 reports 2 memory domains 00:20:12.605 bdev Malloc0 doesn't support RDMA memory domain 00:20:12.605 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:12.605 ========================================================================== 00:20:12.605 Latency [us] 00:20:12.605 IOPS MiB/s Average min max 00:20:12.605 Core 2: 14238.26 55.62 1122.95 413.32 1425.46 00:20:12.605 Core 3: 14223.07 55.56 1124.12 457.22 1927.01 00:20:12.605 ========================================================================== 00:20:12.605 Total : 28461.34 111.18 1123.53 413.32 1927.01 00:20:12.605 00:20:12.605 Total operations: 142364, translate 0 pull_push 569456 memzero 0 00:20:12.605 10:08:57 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero 00:20:12.605 10:08:57 nvmf_rdma.nvmf_host.dma -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:20:12.605 10:08:57 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:20:12.605 10:08:57 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:20:12.605 Ignoring -M option 00:20:12.605 [2024-07-25 10:08:57.740685] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
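
Note: Malloc0 exposes no RDMA memory domain, so the run above exercises the pull_push path rather than translate. Its counters are again self-consistent: 14238.26 + 14223.07 = 28461.33 IOPS against a reported total of 28461.34 (rounding), 28461.34 IOPS x 5 s = 142307, close to the 142364 completed operations, and the pull_push counter is exactly 4 x 142364 = 569456 in this run.
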
00:20:12.605 [2024-07-25 10:08:57.740735] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604777 ] 00:20:12.605 EAL: No free 2048 kB hugepages reported on node 1 00:20:12.863 [2024-07-25 10:08:57.807644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:12.863 [2024-07-25 10:08:57.877310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:12.863 [2024-07-25 10:08:57.877312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.455 bdev aca56aec-431d-4e96-abea-055b4cb216c9 reports 1 memory domains 00:20:19.455 bdev aca56aec-431d-4e96-abea-055b4cb216c9 supports RDMA memory domain 00:20:19.455 Initialization complete, running randread IO for 5 sec on 2 cores 00:20:19.455 ========================================================================== 00:20:19.455 Latency [us] 00:20:19.455 IOPS MiB/s Average min max 00:20:19.455 Core 2: 75579.76 295.23 210.95 75.60 2793.91 00:20:19.455 Core 3: 75291.80 294.11 211.76 57.40 2734.19 00:20:19.456 ========================================================================== 00:20:19.456 Total : 150871.56 589.34 211.35 57.40 2793.91 00:20:19.456 00:20:19.456 Total operations: 754448, translate 0 pull_push 0 memzero 754448 00:20:19.456 10:09:03 nvmf_rdma.nvmf_host.dma -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:20:19.456 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.456 [2024-07-25 10:09:03.419662] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:20.832 Initializing NVMe Controllers 00:20:20.832 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:20:20.832 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:20:20.832 Initialization complete. Launching workers. 00:20:20.832 ======================================================== 00:20:20.832 Latency(us) 00:20:20.832 Device Information : IOPS MiB/s Average min max 00:20:20.832 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2024.69 7.91 7956.57 6585.13 8381.18 00:20:20.832 ======================================================== 00:20:20.832 Total : 2024.69 7.91 7956.57 6585.13 8381.18 00:20:20.832 00:20:20.832 10:09:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate 00:20:20.832 10:09:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:20:20.832 10:09:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@48 -- # local subsystem=0 00:20:20.832 10:09:05 nvmf_rdma.nvmf_host.dma -- host/dma.sh@50 -- # jq . 00:20:20.832 [2024-07-25 10:09:05.743176] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
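
Note: alongside the DMA benchmarks, the suite takes a one-second queue-depth-16 write latency probe with spdk_nvme_perf directly against the target; the deprecation warning about connecting to the discovery subsystem on RDMA/192.168.100.8/4420 is part of the expected output here. The probe as invoked above, runnable by hand against the same target:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -q 16 -o 4096 -w write -t 1 \
      -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420'
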
00:20:20.832 [2024-07-25 10:09:05.743215] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606066 ] 00:20:20.832 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.832 [2024-07-25 10:09:05.807826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:20.832 [2024-07-25 10:09:05.881875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:20.832 [2024-07-25 10:09:05.881875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:27.392 bdev b43cfe87-f53e-42b9-ab94-0ec189b60a1d reports 1 memory domains 00:20:27.392 bdev b43cfe87-f53e-42b9-ab94-0ec189b60a1d supports RDMA memory domain 00:20:27.392 Initialization complete, running randrw IO for 5 sec on 2 cores 00:20:27.392 ========================================================================== 00:20:27.392 Latency [us] 00:20:27.392 IOPS MiB/s Average min max 00:20:27.392 Core 2: 19060.16 74.45 838.69 14.67 8814.79 00:20:27.392 Core 3: 19184.92 74.94 833.24 15.39 8936.18 00:20:27.392 ========================================================================== 00:20:27.392 Total : 38245.08 149.39 835.96 14.67 8936.18 00:20:27.392 00:20:27.392 Total operations: 191280, translate 191172 pull_push 0 memzero 108 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- host/dma.sh@120 -- # nvmftestfini 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@117 -- # sync 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@120 -- # set +e 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:20:27.392 rmmod nvme_rdma 00:20:27.392 rmmod nvme_fabrics 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@124 -- # set -e 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@125 -- # return 0 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@489 -- # '[' -n 2602695 ']' 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@490 -- # killprocess 2602695 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@950 -- # '[' -z 2602695 ']' 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@954 -- # kill -0 2602695 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # uname 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2602695 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@968 -- # echo 
'killing process with pid 2602695' 00:20:27.392 killing process with pid 2602695 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@969 -- # kill 2602695 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@974 -- # wait 2602695 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:20:27.392 00:20:27.392 real 0m31.964s 00:20:27.392 user 1m36.471s 00:20:27.392 sys 0m5.345s 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:27.392 ************************************ 00:20:27.392 END TEST dma 00:20:27.392 ************************************ 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.392 ************************************ 00:20:27.392 START TEST nvmf_identify 00:20:27.392 ************************************ 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:20:27.392 * Looking for test storage... 00:20:27.392 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.392 10:09:11 
nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.392 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:27.393 10:09:11 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 
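The trace above covers the host-identity and app-argument setup that nvmf/common.sh performs before device discovery starts. A minimal sketch of that setup, paraphrased from the xtrace output (the uuid-suffix extraction is an assumption for illustration, not a copy of common.sh):

    # Sketch of the setup traced above; paraphrased, not verbatim common.sh.
    NVME_HOSTNQN=$(nvme gen-hostnqn)            # e.g. nqn.2014-08.org.nvmexpress:uuid:00ad29c2-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}         # assumed: keep only the uuid suffix
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) # shm id and trace-group mask, as traced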
00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:32.669 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 
-- # [[ mlx5_core == unknown ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:32.669 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:32.669 Found net devices under 0000:da:00.0: mlx_0_0 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:32.669 Found net devices under 0000:da:00.1: mlx_0_1 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@420 -- # rdma_device_init 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # uname 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
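Before addresses can be assigned, rdma_device_init loads the kernel RDMA stack; the seven modprobe calls traced above condense to a single loop (a sketch of what load_ib_rdma_modules does, in trace order):

    # Condensed from the load_ib_rdma_modules trace above.
    for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$mod"
    done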
00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:32.669 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:32.670 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:32.670 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:20:32.670 altname enp218s0f0np0 00:20:32.670 altname ens818f0np0 00:20:32.670 inet 192.168.100.8/24 scope global mlx_0_0 00:20:32.670 valid_lft forever preferred_lft forever 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:32.670 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:32.670 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:20:32.670 altname enp218s0f1np1 00:20:32.670 altname ens818f1np1 00:20:32.670 inet 192.168.100.9/24 scope global mlx_0_1 00:20:32.670 valid_lft forever preferred_lft forever 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 
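The per-interface address lookup traced here is a three-stage pipeline: `ip -o -4` prints one line per address, awk takes the CIDR field, and cut strips the prefix length. As a self-contained sketch:

    # Sketch of get_ip_address as traced above: IPv4 address with the prefix length stripped.
    get_ip_address() {
        local interface=$1
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    # e.g. get_ip_address mlx_0_0  ->  192.168.100.8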
00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # continue 2 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:32.670 192.168.100.9' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:32.670 192.168.100.9' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # head -n 1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:32.670 192.168.100.9' 
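With both ports up, the harness collects the candidate RDMA IPs into a newline-separated list; the first line becomes the first target IP and the second line the second, via the head/tail pipeline traced here and continued just below:

    # Sketch of the target-IP selection traced in nvmf/common.sh@456-458.
    RDMA_IP_LIST=$(get_available_rdma_ips)                                 # "192.168.100.8\n192.168.100.9"
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9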
00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # tail -n +2 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # head -n 1 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2610648 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2610648 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2610648 ']' 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:32.670 10:09:17 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:32.670 [2024-07-25 10:09:17.612089] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:32.670 [2024-07-25 10:09:17.612157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.670 EAL: No free 2048 kB hugepages reported on node 1 00:20:32.670 [2024-07-25 10:09:17.679263] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.670 [2024-07-25 10:09:17.761039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:32.670 [2024-07-25 10:09:17.761073] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
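At this point the test launches the target and blocks until its RPC socket answers. A paraphrased sketch of the startup sequence traced above (identify.sh lines 18-23), with waitforlisten polling the /var/tmp/spdk.sock socket reported in the log:

    # Sketch, paraphrased from the trace above; not verbatim identify.sh.
    "${NVMF_APP[@]}" -m 0xF &   # expands to: nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    nvmfpid=$!
    trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$nvmfpid"    # waits for the RPC server on /var/tmp/spdk.sock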
00:20:32.670 [2024-07-25 10:09:17.761090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:32.670 [2024-07-25 10:09:17.761095] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:32.670 [2024-07-25 10:09:17.761100] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:32.670 [2024-07-25 10:09:17.761197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:32.670 [2024-07-25 10:09:17.761326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:32.670 [2024-07-25 10:09:17.761416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:32.670 [2024-07-25 10:09:17.761417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.603 [2024-07-25 10:09:18.448304] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc2acc0/0xc2f1b0) succeed. 00:20:33.603 [2024-07-25 10:09:18.457267] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc2c300/0xc70840) succeed. 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.603 Malloc0 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.603 [2024-07-25 10:09:18.657682] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:33.603 [ 00:20:33.603 { 00:20:33.603 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:33.603 "subtype": "Discovery", 00:20:33.603 "listen_addresses": [ 00:20:33.603 { 00:20:33.603 "trtype": "RDMA", 00:20:33.603 "adrfam": "IPv4", 00:20:33.603 "traddr": "192.168.100.8", 00:20:33.603 "trsvcid": "4420" 00:20:33.603 } 00:20:33.603 ], 00:20:33.603 "allow_any_host": true, 00:20:33.603 "hosts": [] 00:20:33.603 }, 00:20:33.603 { 00:20:33.603 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.603 "subtype": "NVMe", 00:20:33.603 "listen_addresses": [ 00:20:33.603 { 00:20:33.603 "trtype": "RDMA", 00:20:33.603 "adrfam": "IPv4", 00:20:33.603 "traddr": "192.168.100.8", 00:20:33.603 "trsvcid": "4420" 00:20:33.603 } 00:20:33.603 ], 00:20:33.603 "allow_any_host": true, 00:20:33.603 "hosts": [], 00:20:33.603 "serial_number": "SPDK00000000000001", 00:20:33.603 "model_number": "SPDK bdev Controller", 00:20:33.603 "max_namespaces": 32, 00:20:33.603 "min_cntlid": 1, 00:20:33.603 "max_cntlid": 65519, 00:20:33.603 "namespaces": [ 00:20:33.603 { 00:20:33.603 "nsid": 1, 00:20:33.603 "bdev_name": "Malloc0", 00:20:33.603 "name": "Malloc0", 00:20:33.603 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:33.603 "eui64": "ABCDEF0123456789", 00:20:33.603 "uuid": "64175a12-3b37-4473-9fc2-b733ef114a87" 00:20:33.603 } 00:20:33.603 ] 00:20:33.603 } 00:20:33.603 ] 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.603 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:33.603 [2024-07-25 10:09:18.709153] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
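Strung together, the rpc_cmd calls traced above stand up one discovery listener and one NVM subsystem backed by a single Malloc namespace. A condensed sketch follows; using scripts/rpc.py as the client is an assumption (the harness's rpc_cmd wraps the same RPC methods):

    # Condensed from the rpc_cmd trace above; scripts/rpc.py as the client is assumed.
    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_get_subsystems   # returns the JSON dump shown above

The spdk_nvme_identify run that follows then connects to the discovery subsystem registered by the last listener call.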
00:20:33.603 [2024-07-25 10:09:18.709209] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610760 ] 00:20:33.603 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.603 [2024-07-25 10:09:18.753453] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:33.603 [2024-07-25 10:09:18.753535] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:33.603 [2024-07-25 10:09:18.753548] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:33.603 [2024-07-25 10:09:18.753551] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:33.603 [2024-07-25 10:09:18.753576] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:33.868 [2024-07-25 10:09:18.765584] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:20:33.868 [2024-07-25 10:09:18.775909] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:33.868 [2024-07-25 10:09:18.775918] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:33.868 [2024-07-25 10:09:18.775924] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775929] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775933] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775938] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775942] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775946] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775950] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775954] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775959] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775963] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775967] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775971] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775979] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775983] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775987] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775991] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.775995] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.776000] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.776004] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.776008] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.776013] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.776017] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000 00:20:33.868 [2024-07-25 10:09:18.776021] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.776025] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.776029] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.776034] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.776038] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.776042] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.776046] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.776050] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.776054] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.776058] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:33.869 [2024-07-25 10:09:18.776062] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:33.869 [2024-07-25 10:09:18.776065] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:33.869 [2024-07-25 10:09:18.776082] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.776093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182000 00:20:33.869 [2024-07-25 10:09:18.781136] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.869 [2024-07-25 10:09:18.781146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:33.869 [2024-07-25 10:09:18.781153] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781159] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:33.869 [2024-07-25 10:09:18.781164] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:33.869 [2024-07-25 10:09:18.781169] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:33.869 [2024-07-25 10:09:18.781180] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.869 [2024-07-25 10:09:18.781214] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.869 [2024-07-25 10:09:18.781219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:33.869 [2024-07-25 10:09:18.781224] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:33.869 [2024-07-25 10:09:18.781228] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781233] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:33.869 [2024-07-25 10:09:18.781238] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.869 [2024-07-25 10:09:18.781270] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.869 [2024-07-25 10:09:18.781274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:33.869 [2024-07-25 10:09:18.781279] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:33.869 [2024-07-25 10:09:18.781283] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781288] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:33.869 [2024-07-25 10:09:18.781293] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.869 [2024-07-25 10:09:18.781317] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.869 [2024-07-25 10:09:18.781321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:33.869 [2024-07-25 10:09:18.781326] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:33.869 [2024-07-25 10:09:18.781330] 
nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781336] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781342] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.869 [2024-07-25 10:09:18.781358] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.869 [2024-07-25 10:09:18.781363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:33.869 [2024-07-25 10:09:18.781367] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:33.869 [2024-07-25 10:09:18.781371] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:33.869 [2024-07-25 10:09:18.781375] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781379] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:33.869 [2024-07-25 10:09:18.781484] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:33.869 [2024-07-25 10:09:18.781490] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:33.869 [2024-07-25 10:09:18.781500] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.869 [2024-07-25 10:09:18.781531] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.869 [2024-07-25 10:09:18.781536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:33.869 [2024-07-25 10:09:18.781540] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:33.869 [2024-07-25 10:09:18.781544] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781550] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.869 [2024-07-25 10:09:18.781574] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.869 [2024-07-25 10:09:18.781578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:33.869 [2024-07-25 10:09:18.781582] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller 
is ready 00:20:33.869 [2024-07-25 10:09:18.781586] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:33.869 [2024-07-25 10:09:18.781590] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781595] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:33.869 [2024-07-25 10:09:18.781601] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:33.869 [2024-07-25 10:09:18.781609] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182000 00:20:33.869 [2024-07-25 10:09:18.781654] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.869 [2024-07-25 10:09:18.781658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:33.869 [2024-07-25 10:09:18.781665] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:33.869 [2024-07-25 10:09:18.781669] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:33.869 [2024-07-25 10:09:18.781673] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:33.869 [2024-07-25 10:09:18.781679] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:33.869 [2024-07-25 10:09:18.781683] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:33.869 [2024-07-25 10:09:18.781687] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:33.869 [2024-07-25 10:09:18.781691] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781698] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:33.869 [2024-07-25 10:09:18.781704] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.869 [2024-07-25 10:09:18.781729] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.869 [2024-07-25 10:09:18.781733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:33.869 [2024-07-25 10:09:18.781740] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781745] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.869 [2024-07-25 10:09:18.781750] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182000 00:20:33.869 [2024-07-25 10:09:18.781755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.870 [2024-07-25 10:09:18.781760] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.870 [2024-07-25 10:09:18.781770] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.870 [2024-07-25 10:09:18.781779] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:33.870 [2024-07-25 10:09:18.781783] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781789] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:33.870 [2024-07-25 10:09:18.781795] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781800] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.870 [2024-07-25 10:09:18.781817] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.870 [2024-07-25 10:09:18.781822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:33.870 [2024-07-25 10:09:18.781827] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:33.870 [2024-07-25 10:09:18.781831] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:33.870 [2024-07-25 10:09:18.781835] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781842] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182000 00:20:33.870 [2024-07-25 10:09:18.781875] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.870 [2024-07-25 10:09:18.781881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:33.870 [2024-07-25 10:09:18.781886] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 
length 0x10 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781893] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:33.870 [2024-07-25 10:09:18.781912] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x182000 00:20:33.870 [2024-07-25 10:09:18.781925] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.870 [2024-07-25 10:09:18.781954] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.870 [2024-07-25 10:09:18.781958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:33.870 [2024-07-25 10:09:18.781967] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x182000 00:20:33.870 [2024-07-25 10:09:18.781977] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.781982] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.870 [2024-07-25 10:09:18.781986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:33.870 [2024-07-25 10:09:18.781990] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.782006] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.870 [2024-07-25 10:09:18.782011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:33.870 [2024-07-25 10:09:18.782018] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.782024] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x182000 00:20:33.870 [2024-07-25 10:09:18.782029] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:20:33.870 [2024-07-25 10:09:18.782055] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.870 [2024-07-25 10:09:18.782059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:33.870 [2024-07-25 10:09:18.782067] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:20:33.870 ===================================================== 00:20:33.870 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:33.870 
===================================================== 00:20:33.870 Controller Capabilities/Features 00:20:33.870 ================================ 00:20:33.870 Vendor ID: 0000 00:20:33.870 Subsystem Vendor ID: 0000 00:20:33.870 Serial Number: .................... 00:20:33.870 Model Number: ........................................ 00:20:33.870 Firmware Version: 24.09 00:20:33.870 Recommended Arb Burst: 0 00:20:33.870 IEEE OUI Identifier: 00 00 00 00:20:33.870 Multi-path I/O 00:20:33.870 May have multiple subsystem ports: No 00:20:33.870 May have multiple controllers: No 00:20:33.870 Associated with SR-IOV VF: No 00:20:33.870 Max Data Transfer Size: 131072 00:20:33.870 Max Number of Namespaces: 0 00:20:33.870 Max Number of I/O Queues: 1024 00:20:33.870 NVMe Specification Version (VS): 1.3 00:20:33.870 NVMe Specification Version (Identify): 1.3 00:20:33.870 Maximum Queue Entries: 128 00:20:33.870 Contiguous Queues Required: Yes 00:20:33.870 Arbitration Mechanisms Supported 00:20:33.870 Weighted Round Robin: Not Supported 00:20:33.870 Vendor Specific: Not Supported 00:20:33.870 Reset Timeout: 15000 ms 00:20:33.870 Doorbell Stride: 4 bytes 00:20:33.870 NVM Subsystem Reset: Not Supported 00:20:33.870 Command Sets Supported 00:20:33.870 NVM Command Set: Supported 00:20:33.870 Boot Partition: Not Supported 00:20:33.870 Memory Page Size Minimum: 4096 bytes 00:20:33.870 Memory Page Size Maximum: 4096 bytes 00:20:33.870 Persistent Memory Region: Not Supported 00:20:33.870 Optional Asynchronous Events Supported 00:20:33.870 Namespace Attribute Notices: Not Supported 00:20:33.870 Firmware Activation Notices: Not Supported 00:20:33.870 ANA Change Notices: Not Supported 00:20:33.870 PLE Aggregate Log Change Notices: Not Supported 00:20:33.870 LBA Status Info Alert Notices: Not Supported 00:20:33.870 EGE Aggregate Log Change Notices: Not Supported 00:20:33.870 Normal NVM Subsystem Shutdown event: Not Supported 00:20:33.870 Zone Descriptor Change Notices: Not Supported 00:20:33.870 Discovery Log Change Notices: Supported 00:20:33.870 Controller Attributes 00:20:33.870 128-bit Host Identifier: Not Supported 00:20:33.870 Non-Operational Permissive Mode: Not Supported 00:20:33.870 NVM Sets: Not Supported 00:20:33.870 Read Recovery Levels: Not Supported 00:20:33.870 Endurance Groups: Not Supported 00:20:33.870 Predictable Latency Mode: Not Supported 00:20:33.870 Traffic Based Keep ALive: Not Supported 00:20:33.870 Namespace Granularity: Not Supported 00:20:33.870 SQ Associations: Not Supported 00:20:33.870 UUID List: Not Supported 00:20:33.870 Multi-Domain Subsystem: Not Supported 00:20:33.870 Fixed Capacity Management: Not Supported 00:20:33.870 Variable Capacity Management: Not Supported 00:20:33.870 Delete Endurance Group: Not Supported 00:20:33.870 Delete NVM Set: Not Supported 00:20:33.870 Extended LBA Formats Supported: Not Supported 00:20:33.870 Flexible Data Placement Supported: Not Supported 00:20:33.870 00:20:33.870 Controller Memory Buffer Support 00:20:33.870 ================================ 00:20:33.870 Supported: No 00:20:33.870 00:20:33.870 Persistent Memory Region Support 00:20:33.870 ================================ 00:20:33.870 Supported: No 00:20:33.870 00:20:33.870 Admin Command Set Attributes 00:20:33.870 ============================ 00:20:33.870 Security Send/Receive: Not Supported 00:20:33.870 Format NVM: Not Supported 00:20:33.870 Firmware Activate/Download: Not Supported 00:20:33.870 Namespace Management: Not Supported 00:20:33.870 Device Self-Test: Not Supported 00:20:33.870 
Directives: Not Supported 00:20:33.870 NVMe-MI: Not Supported 00:20:33.870 Virtualization Management: Not Supported 00:20:33.870 Doorbell Buffer Config: Not Supported 00:20:33.870 Get LBA Status Capability: Not Supported 00:20:33.870 Command & Feature Lockdown Capability: Not Supported 00:20:33.870 Abort Command Limit: 1 00:20:33.870 Async Event Request Limit: 4 00:20:33.870 Number of Firmware Slots: N/A 00:20:33.870 Firmware Slot 1 Read-Only: N/A 00:20:33.870 Firmware Activation Without Reset: N/A 00:20:33.870 Multiple Update Detection Support: N/A 00:20:33.870 Firmware Update Granularity: No Information Provided 00:20:33.871 Per-Namespace SMART Log: No 00:20:33.871 Asymmetric Namespace Access Log Page: Not Supported 00:20:33.871 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:33.871 Command Effects Log Page: Not Supported 00:20:33.871 Get Log Page Extended Data: Supported 00:20:33.871 Telemetry Log Pages: Not Supported 00:20:33.871 Persistent Event Log Pages: Not Supported 00:20:33.871 Supported Log Pages Log Page: May Support 00:20:33.871 Commands Supported & Effects Log Page: Not Supported 00:20:33.871 Feature Identifiers & Effects Log Page:May Support 00:20:33.871 NVMe-MI Commands & Effects Log Page: May Support 00:20:33.871 Data Area 4 for Telemetry Log: Not Supported 00:20:33.871 Error Log Page Entries Supported: 128 00:20:33.871 Keep Alive: Not Supported 00:20:33.871 00:20:33.871 NVM Command Set Attributes 00:20:33.871 ========================== 00:20:33.871 Submission Queue Entry Size 00:20:33.871 Max: 1 00:20:33.871 Min: 1 00:20:33.871 Completion Queue Entry Size 00:20:33.871 Max: 1 00:20:33.871 Min: 1 00:20:33.871 Number of Namespaces: 0 00:20:33.871 Compare Command: Not Supported 00:20:33.871 Write Uncorrectable Command: Not Supported 00:20:33.871 Dataset Management Command: Not Supported 00:20:33.871 Write Zeroes Command: Not Supported 00:20:33.871 Set Features Save Field: Not Supported 00:20:33.871 Reservations: Not Supported 00:20:33.871 Timestamp: Not Supported 00:20:33.871 Copy: Not Supported 00:20:33.871 Volatile Write Cache: Not Present 00:20:33.871 Atomic Write Unit (Normal): 1 00:20:33.871 Atomic Write Unit (PFail): 1 00:20:33.871 Atomic Compare & Write Unit: 1 00:20:33.871 Fused Compare & Write: Supported 00:20:33.871 Scatter-Gather List 00:20:33.871 SGL Command Set: Supported 00:20:33.871 SGL Keyed: Supported 00:20:33.871 SGL Bit Bucket Descriptor: Not Supported 00:20:33.871 SGL Metadata Pointer: Not Supported 00:20:33.871 Oversized SGL: Not Supported 00:20:33.871 SGL Metadata Address: Not Supported 00:20:33.871 SGL Offset: Supported 00:20:33.871 Transport SGL Data Block: Not Supported 00:20:33.871 Replay Protected Memory Block: Not Supported 00:20:33.871 00:20:33.871 Firmware Slot Information 00:20:33.871 ========================= 00:20:33.871 Active slot: 0 00:20:33.871 00:20:33.871 00:20:33.871 Error Log 00:20:33.871 ========= 00:20:33.871 00:20:33.871 Active Namespaces 00:20:33.871 ================= 00:20:33.871 Discovery Log Page 00:20:33.871 ================== 00:20:33.871 Generation Counter: 2 00:20:33.871 Number of Records: 2 00:20:33.871 Record Format: 0 00:20:33.871 00:20:33.871 Discovery Log Entry 0 00:20:33.871 ---------------------- 00:20:33.871 Transport Type: 1 (RDMA) 00:20:33.871 Address Family: 1 (IPv4) 00:20:33.871 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:33.871 Entry Flags: 00:20:33.871 Duplicate Returned Information: 1 00:20:33.871 Explicit Persistent Connection Support for Discovery: 1 00:20:33.871 Transport Requirements: 
00:20:33.871 Secure Channel: Not Required 00:20:33.871 Port ID: 0 (0x0000) 00:20:33.871 Controller ID: 65535 (0xffff) 00:20:33.871 Admin Max SQ Size: 128 00:20:33.871 Transport Service Identifier: 4420 00:20:33.871 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:33.871 Transport Address: 192.168.100.8 00:20:33.871 Transport Specific Address Subtype - RDMA 00:20:33.871 RDMA QP Service Type: 1 (Reliable Connected) 00:20:33.871 RDMA Provider Type: 1 (No provider specified) 00:20:33.871 RDMA CM Service: 1 (RDMA_CM) 00:20:33.871 Discovery Log Entry 1 00:20:33.871 ---------------------- 00:20:33.871 Transport Type: 1 (RDMA) 00:20:33.871 Address Family: 1 (IPv4) 00:20:33.871 Subsystem Type: 2 (NVM Subsystem) 00:20:33.871 Entry Flags: 00:20:33.871 Duplicate Returned Information: 0 00:20:33.871 Explicit Persistent Connection Support for Discovery: 0 00:20:33.871 Transport Requirements: 00:20:33.871 Secure Channel: Not Required 00:20:33.871 Port ID: 0 (0x0000) 00:20:33.871 Controller ID: 65535 (0xffff) 00:20:33.871 Admin Max SQ Size: [2024-07-25 10:09:18.782139] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:33.871 [2024-07-25 10:09:18.782147] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 49202 doesn't match qid 00:20:33.871 [2024-07-25 10:09:18.782159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32692 cdw0:5 sqhd:8f40 p:0 m:0 dnr:0 00:20:33.871 [2024-07-25 10:09:18.782164] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 49202 doesn't match qid 00:20:33.871 [2024-07-25 10:09:18.782170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32692 cdw0:5 sqhd:8f40 p:0 m:0 dnr:0 00:20:33.871 [2024-07-25 10:09:18.782175] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 49202 doesn't match qid 00:20:33.871 [2024-07-25 10:09:18.782182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32692 cdw0:5 sqhd:8f40 p:0 m:0 dnr:0 00:20:33.871 [2024-07-25 10:09:18.782187] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 49202 doesn't match qid 00:20:33.871 [2024-07-25 10:09:18.782192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32692 cdw0:5 sqhd:8f40 p:0 m:0 dnr:0 00:20:33.871 [2024-07-25 10:09:18.782199] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.871 [2024-07-25 10:09:18.782221] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.871 [2024-07-25 10:09:18.782225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:20:33.871 [2024-07-25 10:09:18.782234] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782239] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.871 [2024-07-25 10:09:18.782244] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:20:33.871 [2024-07-25 
10:09:18.782265] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.871 [2024-07-25 10:09:18.782269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:33.871 [2024-07-25 10:09:18.782274] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:33.871 [2024-07-25 10:09:18.782277] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:33.871 [2024-07-25 10:09:18.782282] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782288] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782294] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.871 [2024-07-25 10:09:18.782311] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.871 [2024-07-25 10:09:18.782317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:33.871 [2024-07-25 10:09:18.782321] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782329] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.871 [2024-07-25 10:09:18.782353] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.871 [2024-07-25 10:09:18.782358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:33.871 [2024-07-25 10:09:18.782362] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782369] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.871 [2024-07-25 10:09:18.782392] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.871 [2024-07-25 10:09:18.782397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:33.871 [2024-07-25 10:09:18.782403] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782410] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.871 [2024-07-25 10:09:18.782435] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.871 [2024-07-25 10:09:18.782440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:33.871 [2024-07-25 10:09:18.782444] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782451] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.871 [2024-07-25 10:09:18.782457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.871 [2024-07-25 10:09:18.782475] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782484] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782491] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782512] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782521] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782528] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782552] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782561] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782568] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782595] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782604] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782611] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782637] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782647] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782654] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782679] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782688] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782695] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782722] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782731] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782738] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782762] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782771] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782778] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782804] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782812] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782819] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782850] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782859] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782865] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782887] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782897] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782904] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782909] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782928] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782936] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782943] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.782964] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.782969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.782973] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782980] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.782986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.783001] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.783006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.783010] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.783017] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.783022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.783043] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.783047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.783051] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.783058] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.783064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.783083] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.783087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.783091] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.783098] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.783104] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.783125] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.783136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.783141] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.783148] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.783153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.872 [2024-07-25 10:09:18.783169] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.872 [2024-07-25 10:09:18.783173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:33.872 [2024-07-25 10:09:18.783178] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:20:33.872 [2024-07-25 10:09:18.783185] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783207] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783216] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783223] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783250] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783259] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783266] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783293] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783301] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783308] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783331] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783340] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783346] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783375] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783384] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783391] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783415] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783423] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783430] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783455] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783463] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783470] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783494] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783503] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783510] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783531] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783540] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783547] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783572] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783581] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783588] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783615] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783624] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783631] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783655] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783663] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783670] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783676] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783696] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783704] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783711] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783738] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783747] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783754] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.873 [2024-07-25 10:09:18.783760] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:33.873 [2024-07-25 10:09:18.783782] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.873 [2024-07-25 10:09:18.783787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:33.873 [2024-07-25 10:09:18.783791] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783798] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783804] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.783821] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.783825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.783829] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783836] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783843] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.783862] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.783866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.783870] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783878] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783883] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.783906] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.783911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.783915] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783922] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.783946] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.783950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.783955] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783962] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.783983] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.783987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.783992] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.783998] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784022] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784031] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784038] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784062] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784071] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784078] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784101] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784110] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784117] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784122] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784148] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784157] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784164] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784190] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784199] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784205] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784211] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784236] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784244] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784251] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784277] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784286] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784293] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784319] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784327] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784335] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784363] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784371] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784378] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784402] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784411] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784418] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784440] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784449] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784456] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784477] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.874 [2024-07-25 10:09:18.784481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:33.874 [2024-07-25 10:09:18.784486] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784493] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.874 [2024-07-25 10:09:18.784498] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.874 [2024-07-25 10:09:18.784518] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784527] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784534] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784539] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784558] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784566] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784576] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784598] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784607] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784614] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784641] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784650] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784656] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784684] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784692] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784699] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784723] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784732] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784739] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784745] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784766] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784775] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784782] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784806] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784816] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784823] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784847] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784856] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784862] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784891] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784900] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784907] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 
0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784935] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784944] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784951] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.784980] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.784984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.784989] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.784995] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.785001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.785021] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.785025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.785030] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.785036] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.785042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.785064] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.785068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.785074] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.785080] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.785086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.785105] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.785109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.785114] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.785120] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.789133] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.789143] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.789147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.789151] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.789158] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.789164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.875 [2024-07-25 10:09:18.789183] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.875 [2024-07-25 10:09:18.789187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0018 p:0 m:0 dnr:0 00:20:33.875 [2024-07-25 10:09:18.789192] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:20:33.875 [2024-07-25 10:09:18.789197] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:20:33.875 128 00:20:33.875 Transport Service Identifier: 4420 00:20:33.875 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:33.875 Transport Address: 192.168.100.8 00:20:33.875 Transport Specific Address Subtype - RDMA 00:20:33.875 RDMA QP Service Type: 1 (Reliable Connected) 00:20:33.875 RDMA Provider Type: 1 (No provider specified) 00:20:33.876 RDMA CM Service: 1 (RDMA_CM) 00:20:33.876 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:33.876 [2024-07-25 10:09:18.856225] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
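(Editor's sketch, not part of the test output: the spdk_nvme_identify invocation above passes its target as a transport-ID string via -r. A minimal, hedged illustration of what that string does through SPDK's public API is below, using the same address and NQN that appear in this log; the program name "identify_sketch" is hypothetical and error handling is abbreviated.)

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {0};
	struct spdk_nvme_ctrlr *ctrlr;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";   /* hypothetical app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same key:value grammar as the -r argument logged above. */
	if (spdk_nvme_transport_id_parse(&trid,
			"trtype:rdma adrfam:IPv4 traddr:192.168.100.8 "
			"trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the FABRIC CONNECT / FABRIC PROPERTY GET admin-queue
	 * sequence that the *DEBUG* trace below walks through. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* The CNTLID 0x0001 and MDTS max_xfer_size values printed in the
	 * trace are the same data exposed by these accessors. */
	printf("cntlid: 0x%04x, max xfer: %u bytes\n",
	       spdk_nvme_ctrlr_get_data(ctrlr)->cntlid,
	       spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

	spdk_nvme_detach(ctrlr);
	return 0;
}

(The key:value grammar -- trtype/adrfam/traddr/trsvcid/subnqn -- is exactly what spdk_nvme_transport_id_parse() accepts, which is why the quoted -r argument can be handed to it verbatim.)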
00:20:33.876 [2024-07-25 10:09:18.856257] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610849 ] 00:20:33.876 EAL: No free 2048 kB hugepages reported on node 1 00:20:33.876 [2024-07-25 10:09:18.896322] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:33.876 [2024-07-25 10:09:18.896391] nvme_rdma.c:2192:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:20:33.876 [2024-07-25 10:09:18.896405] nvme_rdma.c:1211:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:20:33.876 [2024-07-25 10:09:18.896408] nvme_rdma.c:1215:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:20:33.876 [2024-07-25 10:09:18.896430] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:33.876 [2024-07-25 10:09:18.906669] nvme_rdma.c: 430:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 00:20:33.876 [2024-07-25 10:09:18.916987] nvme_rdma.c:1100:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:33.876 [2024-07-25 10:09:18.916997] nvme_rdma.c:1105:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:20:33.876 [2024-07-25 10:09:18.917002] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917007] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917012] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917016] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917020] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917024] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917029] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917033] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917037] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917041] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917046] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917050] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917054] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917058] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917062] nvme_rdma.c: 
888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917067] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917071] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917075] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917079] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917084] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917088] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917092] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917096] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917101] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917105] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917109] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917116] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917120] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917124] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917134] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917138] nvme_rdma.c: 888:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917142] nvme_rdma.c:1119:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:20:33.876 [2024-07-25 10:09:18.917146] nvme_rdma.c:1122:nvme_rdma_connect_established: *DEBUG*: rc =0 00:20:33.876 [2024-07-25 10:09:18.917149] nvme_rdma.c:1127:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:20:33.876 [2024-07-25 10:09:18.917159] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.917168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf1c0 len:0x400 key:0x182000 00:20:33.876 [2024-07-25 10:09:18.922133] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.876 [2024-07-25 10:09:18.922141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:20:33.876 [2024-07-25 10:09:18.922147] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf640 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.922152] 
nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:33.876 [2024-07-25 10:09:18.922157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:33.876 [2024-07-25 10:09:18.922162] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:33.876 [2024-07-25 10:09:18.922172] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.922178] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.876 [2024-07-25 10:09:18.922195] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.876 [2024-07-25 10:09:18.922200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:20:33.876 [2024-07-25 10:09:18.922204] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:33.876 [2024-07-25 10:09:18.922208] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf668 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.922213] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:33.876 [2024-07-25 10:09:18.922219] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.922225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.876 [2024-07-25 10:09:18.922245] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.876 [2024-07-25 10:09:18.922249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:20:33.876 [2024-07-25 10:09:18.922254] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:33.876 [2024-07-25 10:09:18.922258] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf690 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.922263] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:33.876 [2024-07-25 10:09:18.922272] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.922278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.876 [2024-07-25 10:09:18.922300] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.876 [2024-07-25 10:09:18.922304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:33.876 [2024-07-25 10:09:18.922309] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:33.876 [2024-07-25 10:09:18.922313] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6b8 length 0x10 lkey 0x182000 
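(Editor's sketch: the Property Get/Set round-trips above implement the standard NVMe enable handshake that the next trace lines spell out -- "CC.EN = 0 && CSTS.RDY = 0", then "Setting CC.EN = 1", then "wait for CSTS.RDY = 1". The sketch below simulates that handshake with the register layouts from spdk/nvme_spec.h; the global register file and write_cc stub are hypothetical stand-ins for the FABRIC PROPERTY GET/SET transport seen in the log.)

#include <stdbool.h>
#include <stdio.h>
#include "spdk/nvme_spec.h"

/* Simulated register file standing in for the fabrics Property Get/Set
 * round-trips that carry CC and CSTS over the admin queue pair. */
static union spdk_nvme_cc_register g_cc;
static union spdk_nvme_csts_register g_csts;

static void write_cc(union spdk_nvme_cc_register cc)
{
	g_cc = cc;
	g_csts.bits.rdy = cc.bits.en;   /* stub target: RDY follows EN */
}

int main(void)
{
	/* "CC.EN = 0 && CSTS.RDY = 0": controller starts disabled. */
	if (g_cc.bits.en == 0 && g_csts.bits.rdy == 0) {
		union spdk_nvme_cc_register cc = g_cc;
		cc.bits.en = 1;          /* "Setting CC.EN = 1" */
		write_cc(cc);
		/* "wait for CSTS.RDY = 1" -- the trace gives this state a
		 * 15000 ms timeout; a real poller would enforce it. */
		while (g_csts.bits.rdy == 0) {
		}
		/* "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" */
		printf("controller is ready\n");
	}
	return 0;
}

(In SPDK itself this state machine lives in nvme_ctrlr.c, which is exactly what the _nvme_ctrlr_set_state trace entries in this log are reporting.)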
00:20:33.876 [2024-07-25 10:09:18.922320] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.922326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.876 [2024-07-25 10:09:18.922343] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.876 [2024-07-25 10:09:18.922347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:33.876 [2024-07-25 10:09:18.922351] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:33.876 [2024-07-25 10:09:18.922355] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:33.876 [2024-07-25 10:09:18.922359] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e0 length 0x10 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.922364] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:33.876 [2024-07-25 10:09:18.922469] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:33.876 [2024-07-25 10:09:18.922472] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:33.876 [2024-07-25 10:09:18.922481] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.876 [2024-07-25 10:09:18.922487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.876 [2024-07-25 10:09:18.922506] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.876 [2024-07-25 10:09:18.922510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:33.877 [2024-07-25 10:09:18.922515] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:33.877 [2024-07-25 10:09:18.922519] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf708 length 0x10 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922525] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.877 [2024-07-25 10:09:18.922548] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.877 [2024-07-25 10:09:18.922552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:20:33.877 [2024-07-25 10:09:18.922556] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:33.877 [2024-07-25 10:09:18.922560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 
30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922565] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf730 length 0x10 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922570] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:33.877 [2024-07-25 10:09:18.922576] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922583] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182000 00:20:33.877 [2024-07-25 10:09:18.922633] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.877 [2024-07-25 10:09:18.922637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:33.877 [2024-07-25 10:09:18.922643] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:33.877 [2024-07-25 10:09:18.922647] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:33.877 [2024-07-25 10:09:18.922651] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:33.877 [2024-07-25 10:09:18.922656] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:33.877 [2024-07-25 10:09:18.922660] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:33.877 [2024-07-25 10:09:18.922664] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922667] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf758 length 0x10 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922673] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922678] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.877 [2024-07-25 10:09:18.922706] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.877 [2024-07-25 10:09:18.922710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:33.877 [2024-07-25 10:09:18.922716] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d04c0 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.877 [2024-07-25 10:09:18.922726] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0600 length 0x40 lkey 0x182000 00:20:33.877 
[2024-07-25 10:09:18.922731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.877 [2024-07-25 10:09:18.922736] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.877 [2024-07-25 10:09:18.922746] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.877 [2024-07-25 10:09:18.922756] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922760] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf780 length 0x10 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922766] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922772] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922777] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.877 [2024-07-25 10:09:18.922793] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.877 [2024-07-25 10:09:18.922798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:20:33.877 [2024-07-25 10:09:18.922802] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:33.877 [2024-07-25 10:09:18.922806] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922810] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7a8 length 0x10 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922815] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922820] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922826] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.877 [2024-07-25 10:09:18.922854] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.877 [2024-07-25 10:09:18.922859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:20:33.877 [2024-07-25 10:09:18.922907] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922911] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d0 length 0x10 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922917] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922924] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x182000 00:20:33.877 [2024-07-25 10:09:18.922956] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.877 [2024-07-25 10:09:18.922961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:33.877 [2024-07-25 10:09:18.922968] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:33.877 [2024-07-25 10:09:18.922977] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922981] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7f8 length 0x10 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.922987] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.922995] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.923000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182000 00:20:33.877 [2024-07-25 10:09:18.923030] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.877 [2024-07-25 10:09:18.923034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:33.877 [2024-07-25 10:09:18.923044] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.923049] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf820 length 0x10 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.923055] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.923061] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.923067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x182000 00:20:33.877 [2024-07-25 10:09:18.923090] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.877 [2024-07-25 10:09:18.923094] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:33.877 [2024-07-25 10:09:18.923101] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.923105] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf848 length 0x10 lkey 0x182000 00:20:33.877 [2024-07-25 10:09:18.923110] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.923116] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.923121] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:33.877 [2024-07-25 10:09:18.923126] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:33.878 [2024-07-25 10:09:18.923135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:33.878 [2024-07-25 10:09:18.923140] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:33.878 [2024-07-25 10:09:18.923143] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:33.878 [2024-07-25 10:09:18.923147] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:33.878 [2024-07-25 10:09:18.923158] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.878 [2024-07-25 10:09:18.923170] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:33.878 [2024-07-25 10:09:18.923182] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.878 [2024-07-25 10:09:18.923188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:33.878 [2024-07-25 10:09:18.923193] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf870 length 0x10 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923199] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:0 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.878 [2024-07-25 10:09:18.923211] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.878 [2024-07-25 10:09:18.923215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 
p:0 m:0 dnr:0 00:20:33.878 [2024-07-25 10:09:18.923219] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf898 length 0x10 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923226] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.878 [2024-07-25 10:09:18.923230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:33.878 [2024-07-25 10:09:18.923234] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c0 length 0x10 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923240] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923246] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:0 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.878 [2024-07-25 10:09:18.923267] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.878 [2024-07-25 10:09:18.923272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:33.878 [2024-07-25 10:09:18.923276] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8e8 length 0x10 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923282] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.878 [2024-07-25 10:09:18.923307] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.878 [2024-07-25 10:09:18.923311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:20:33.878 [2024-07-25 10:09:18.923315] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf910 length 0x10 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923328] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0380 length 0x40 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x182000 00:20:33.878 [2024-07-25 10:09:18.923340] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d09c0 length 0x40 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x182000 00:20:33.878 [2024-07-25 10:09:18.923352] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b00 length 0x40 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x182000 00:20:33.878 [2024-07-25 10:09:18.923364] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923370] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x182000 00:20:33.878 [2024-07-25 10:09:18.923378] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.878 [2024-07-25 10:09:18.923382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:33.878 [2024-07-25 10:09:18.923390] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf938 length 0x10 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923405] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.878 [2024-07-25 10:09:18.923410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:33.878 [2024-07-25 10:09:18.923417] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf960 length 0x10 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923423] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.878 [2024-07-25 10:09:18.923427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:33.878 [2024-07-25 10:09:18.923432] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf988 length 0x10 lkey 0x182000 00:20:33.878 [2024-07-25 10:09:18.923446] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.878 [2024-07-25 10:09:18.923450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:33.878 [2024-07-25 10:09:18.923457] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b0 length 0x10 lkey 0x182000 00:20:33.878 ===================================================== 00:20:33.878 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:20:33.878 ===================================================== 00:20:33.878 Controller Capabilities/Features 00:20:33.878 ================================ 00:20:33.878 Vendor ID: 8086 00:20:33.878 Subsystem Vendor ID: 8086 00:20:33.878 Serial Number: SPDK00000000000001 00:20:33.878 Model Number: SPDK bdev Controller 00:20:33.878 Firmware Version: 24.09 00:20:33.878 Recommended Arb Burst: 6 00:20:33.878 IEEE OUI Identifier: e4 d2 5c 00:20:33.878 Multi-path I/O 00:20:33.878 May have multiple subsystem ports: Yes 00:20:33.878 May have multiple controllers: Yes 00:20:33.878 Associated with SR-IOV VF: No 00:20:33.878 Max Data Transfer Size: 131072 00:20:33.878 Max Number of Namespaces: 32 00:20:33.878 Max Number of I/O Queues: 127 00:20:33.878 NVMe Specification Version (VS): 1.3 00:20:33.878 NVMe Specification Version (Identify): 1.3 00:20:33.878 Maximum Queue Entries: 128 00:20:33.878 Contiguous Queues Required: Yes 00:20:33.878 Arbitration Mechanisms Supported 00:20:33.878 Weighted Round Robin: Not Supported 00:20:33.878 Vendor Specific: Not Supported 00:20:33.878 Reset Timeout: 15000 ms 00:20:33.878 Doorbell Stride: 4 bytes 00:20:33.878 NVM Subsystem Reset: Not Supported 00:20:33.878 Command Sets Supported 00:20:33.878 NVM Command Set: Supported 00:20:33.878 Boot Partition: Not Supported 00:20:33.878 Memory Page Size Minimum: 4096 bytes 00:20:33.878 Memory Page Size Maximum: 4096 bytes 00:20:33.878 Persistent Memory Region: Not Supported 00:20:33.878 Optional Asynchronous Events 
Supported 00:20:33.878 Namespace Attribute Notices: Supported 00:20:33.878 Firmware Activation Notices: Not Supported 00:20:33.878 ANA Change Notices: Not Supported 00:20:33.878 PLE Aggregate Log Change Notices: Not Supported 00:20:33.878 LBA Status Info Alert Notices: Not Supported 00:20:33.878 EGE Aggregate Log Change Notices: Not Supported 00:20:33.878 Normal NVM Subsystem Shutdown event: Not Supported 00:20:33.878 Zone Descriptor Change Notices: Not Supported 00:20:33.878 Discovery Log Change Notices: Not Supported 00:20:33.878 Controller Attributes 00:20:33.878 128-bit Host Identifier: Supported 00:20:33.878 Non-Operational Permissive Mode: Not Supported 00:20:33.878 NVM Sets: Not Supported 00:20:33.878 Read Recovery Levels: Not Supported 00:20:33.878 Endurance Groups: Not Supported 00:20:33.878 Predictable Latency Mode: Not Supported 00:20:33.878 Traffic Based Keep Alive: Not Supported 00:20:33.878 Namespace Granularity: Not Supported 00:20:33.878 SQ Associations: Not Supported 00:20:33.878 UUID List: Not Supported 00:20:33.879 Multi-Domain Subsystem: Not Supported 00:20:33.879 Fixed Capacity Management: Not Supported 00:20:33.879 Variable Capacity Management: Not Supported 00:20:33.879 Delete Endurance Group: Not Supported 00:20:33.879 Delete NVM Set: Not Supported 00:20:33.879 Extended LBA Formats Supported: Not Supported 00:20:33.879 Flexible Data Placement Supported: Not Supported 00:20:33.879 00:20:33.879 Controller Memory Buffer Support 00:20:33.879 ================================ 00:20:33.879 Supported: No 00:20:33.879 00:20:33.879 Persistent Memory Region Support 00:20:33.879 ================================ 00:20:33.879 Supported: No 00:20:33.879 00:20:33.879 Admin Command Set Attributes 00:20:33.879 ============================ 00:20:33.879 Security Send/Receive: Not Supported 00:20:33.879 Format NVM: Not Supported 00:20:33.879 Firmware Activate/Download: Not Supported 00:20:33.879 Namespace Management: Not Supported 00:20:33.879 Device Self-Test: Not Supported 00:20:33.879 Directives: Not Supported 00:20:33.879 NVMe-MI: Not Supported 00:20:33.879 Virtualization Management: Not Supported 00:20:33.879 Doorbell Buffer Config: Not Supported 00:20:33.879 Get LBA Status Capability: Not Supported 00:20:33.879 Command & Feature Lockdown Capability: Not Supported 00:20:33.879 Abort Command Limit: 4 00:20:33.879 Async Event Request Limit: 4 00:20:33.879 Number of Firmware Slots: N/A 00:20:33.879 Firmware Slot 1 Read-Only: N/A 00:20:33.879 Firmware Activation Without Reset: N/A 00:20:33.879 Multiple Update Detection Support: N/A 00:20:33.879 Firmware Update Granularity: No Information Provided 00:20:33.879 Per-Namespace SMART Log: No 00:20:33.879 Asymmetric Namespace Access Log Page: Not Supported 00:20:33.879 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:33.879 Command Effects Log Page: Supported 00:20:33.879 Get Log Page Extended Data: Supported 00:20:33.879 Telemetry Log Pages: Not Supported 00:20:33.879 Persistent Event Log Pages: Not Supported 00:20:33.879 Supported Log Pages Log Page: May Support 00:20:33.879 Commands Supported & Effects Log Page: Not Supported 00:20:33.879 Feature Identifiers & Effects Log Page: May Support 00:20:33.879 NVMe-MI Commands & Effects Log Page: May Support 00:20:33.879 Data Area 4 for Telemetry Log: Not Supported 00:20:33.879 Error Log Page Entries Supported: 128 00:20:33.879 Keep Alive: Supported 00:20:33.879 Keep Alive Granularity: 10000 ms 00:20:33.879 00:20:33.879 NVM Command Set Attributes 00:20:33.879 ========================== 00:20:33.879 
Submission Queue Entry Size 00:20:33.879 Max: 64 00:20:33.879 Min: 64 00:20:33.879 Completion Queue Entry Size 00:20:33.879 Max: 16 00:20:33.879 Min: 16 00:20:33.879 Number of Namespaces: 32 00:20:33.879 Compare Command: Supported 00:20:33.879 Write Uncorrectable Command: Not Supported 00:20:33.879 Dataset Management Command: Supported 00:20:33.879 Write Zeroes Command: Supported 00:20:33.879 Set Features Save Field: Not Supported 00:20:33.879 Reservations: Supported 00:20:33.879 Timestamp: Not Supported 00:20:33.879 Copy: Supported 00:20:33.879 Volatile Write Cache: Present 00:20:33.879 Atomic Write Unit (Normal): 1 00:20:33.879 Atomic Write Unit (PFail): 1 00:20:33.879 Atomic Compare & Write Unit: 1 00:20:33.879 Fused Compare & Write: Supported 00:20:33.879 Scatter-Gather List 00:20:33.879 SGL Command Set: Supported 00:20:33.879 SGL Keyed: Supported 00:20:33.879 SGL Bit Bucket Descriptor: Not Supported 00:20:33.879 SGL Metadata Pointer: Not Supported 00:20:33.879 Oversized SGL: Not Supported 00:20:33.879 SGL Metadata Address: Not Supported 00:20:33.879 SGL Offset: Supported 00:20:33.879 Transport SGL Data Block: Not Supported 00:20:33.879 Replay Protected Memory Block: Not Supported 00:20:33.879 00:20:33.879 Firmware Slot Information 00:20:33.879 ========================= 00:20:33.879 Active slot: 1 00:20:33.879 Slot 1 Firmware Revision: 24.09 00:20:33.879 00:20:33.879 00:20:33.879 Commands Supported and Effects 00:20:33.879 ============================== 00:20:33.879 Admin Commands 00:20:33.879 -------------- 00:20:33.879 Get Log Page (02h): Supported 00:20:33.879 Identify (06h): Supported 00:20:33.879 Abort (08h): Supported 00:20:33.879 Set Features (09h): Supported 00:20:33.879 Get Features (0Ah): Supported 00:20:33.879 Asynchronous Event Request (0Ch): Supported 00:20:33.879 Keep Alive (18h): Supported 00:20:33.879 I/O Commands 00:20:33.879 ------------ 00:20:33.879 Flush (00h): Supported LBA-Change 00:20:33.879 Write (01h): Supported LBA-Change 00:20:33.879 Read (02h): Supported 00:20:33.879 Compare (05h): Supported 00:20:33.879 Write Zeroes (08h): Supported LBA-Change 00:20:33.879 Dataset Management (09h): Supported LBA-Change 00:20:33.879 Copy (19h): Supported LBA-Change 00:20:33.879 00:20:33.879 Error Log 00:20:33.879 ========= 00:20:33.879 00:20:33.879 Arbitration 00:20:33.879 =========== 00:20:33.879 Arbitration Burst: 1 00:20:33.879 00:20:33.879 Power Management 00:20:33.879 ================ 00:20:33.879 Number of Power States: 1 00:20:33.879 Current Power State: Power State #0 00:20:33.879 Power State #0: 00:20:33.879 Max Power: 0.00 W 00:20:33.879 Non-Operational State: Operational 00:20:33.879 Entry Latency: Not Reported 00:20:33.879 Exit Latency: Not Reported 00:20:33.879 Relative Read Throughput: 0 00:20:33.879 Relative Read Latency: 0 00:20:33.879 Relative Write Throughput: 0 00:20:33.879 Relative Write Latency: 0 00:20:33.879 Idle Power: Not Reported 00:20:33.879 Active Power: Not Reported 00:20:33.879 Non-Operational Permissive Mode: Not Supported 00:20:33.879 00:20:33.879 Health Information 00:20:33.879 ================== 00:20:33.879 Critical Warnings: 00:20:33.879 Available Spare Space: OK 00:20:33.879 Temperature: OK 00:20:33.879 Device Reliability: OK 00:20:33.879 Read Only: No 00:20:33.879 Volatile Memory Backup: OK 00:20:33.879 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:33.879 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:33.879 Available Spare: 0% 00:20:33.879 Available Spare Threshold: 0% 00:20:33.879 Life Percentage [2024-07-25 10:09:18.923528] 
nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0c40 length 0x40 lkey 0x182000 00:20:33.879 [2024-07-25 10:09:18.923535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.879 [2024-07-25 10:09:18.923551] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.879 [2024-07-25 10:09:18.923555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:33.879 [2024-07-25 10:09:18.923559] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000 00:20:33.879 [2024-07-25 10:09:18.923584] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:33.879 [2024-07-25 10:09:18.923591] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59622 doesn't match qid 00:20:33.879 [2024-07-25 10:09:18.923603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32624 cdw0:5 sqhd:0f40 p:0 m:0 dnr:0 00:20:33.879 [2024-07-25 10:09:18.923607] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59622 doesn't match qid 00:20:33.879 [2024-07-25 10:09:18.923613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32624 cdw0:5 sqhd:0f40 p:0 m:0 dnr:0 00:20:33.879 [2024-07-25 10:09:18.923617] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59622 doesn't match qid 00:20:33.879 [2024-07-25 10:09:18.923623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32624 cdw0:5 sqhd:0f40 p:0 m:0 dnr:0 00:20:33.879 [2024-07-25 10:09:18.923628] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 59622 doesn't match qid 00:20:33.879 [2024-07-25 10:09:18.923633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32624 cdw0:5 sqhd:0f40 p:0 m:0 dnr:0 00:20:33.879 [2024-07-25 10:09:18.923640] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0880 length 0x40 lkey 0x182000 00:20:33.879 [2024-07-25 10:09:18.923646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.879 [2024-07-25 10:09:18.923663] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.879 [2024-07-25 10:09:18.923669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:20:33.879 [2024-07-25 10:09:18.923676] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.879 [2024-07-25 10:09:18.923682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.879 [2024-07-25 10:09:18.923686] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000 00:20:33.879 [2024-07-25 10:09:18.923710] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.879 [2024-07-25 10:09:18.923714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:33.879 [2024-07-25 10:09:18.923719] 
nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:33.880 [2024-07-25 10:09:18.923723] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:33.880 [2024-07-25 10:09:18.923727] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa28 length 0x10 lkey 0x182000 00:20:33.880 [2024-07-25 10:09:18.923733] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.880 [2024-07-25 10:09:18.923739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.880 [2024-07-25 10:09:18.923759] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.880 [2024-07-25 10:09:18.923763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:20:33.880 [2024-07-25 10:09:18.923767] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa50 length 0x10 lkey 0x182000 00:20:33.880 [2024-07-25 10:09:18.923774] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.880 [2024-07-25 10:09:18.923780] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.880 [2024-07-25 10:09:18.923797] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.880 [2024-07-25 10:09:18.923801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:20:33.880 [2024-07-25 10:09:18.923806] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa78 length 0x10 lkey 0x182000 00:20:33.880 [2024-07-25 10:09:18.923813] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.880 [2024-07-25 10:09:18.923818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.880 [2024-07-25 10:09:18.923834] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.880 [2024-07-25 10:09:18.923839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:20:33.880 [2024-07-25 10:09:18.923843] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa0 length 0x10 lkey 0x182000 00:20:33.880 [2024-07-25 10:09:18.923850] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000 00:20:33.880 [2024-07-25 10:09:18.923856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:33.880 [2024-07-25 10:09:18.923877] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:20:33.880 [2024-07-25 10:09:18.923881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:20:33.880 [2024-07-25 10:09:18.923886] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfac8 length 0x10 lkey 0x182000 00:20:33.880 [2024-07-25 10:09:18.923892] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d0740 length 0x40 lkey 0x182000
00:20:33.880 [2024-07-25 10:09:18.923900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:33.880 [2024-07-25 10:09:18.923919] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:33.880 [2024-07-25 10:09:18.923924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0
00:20:33.880 [2024-07-25 10:09:18.923928] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf0 length 0x10 lkey 0x182000
00:20:33.880 [2024-07-25 10:09:18.923935] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length 0x40 lkey 0x182000
00:20:33.880 [2024-07-25 10:09:18.923940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
[... dozens of further iterations of the identical CQ recv completion / SUCCESS (00/00) / request_ready / qpair_submit_request / FABRIC PROPERTY GET debug cycle elided; sqhd advances 0000 through 001f and wraps (timestamps 10:09:18.923962 through 10:09:18.926095) while the host polls the controller during shutdown ...]
00:20:33.883 [2024-07-25 10:09:18.926118] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:33.883 [2024-07-25 10:09:18.926122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0
00:20:33.883 [2024-07-25 10:09:18.926126] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9d8 length 0x10 lkey 0x182000
00:20:33.883 [2024-07-25 10:09:18.930143] nvme_rdma.c:2271:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0740 length
0x40 lkey 0x182000
00:20:33.883 [2024-07-25 10:09:18.930150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0
00:20:33.883 [2024-07-25 10:09:18.930172] nvme_rdma.c:2474:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion
00:20:33.883 [2024-07-25 10:09:18.930176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0017 p:0 m:0 dnr:0
00:20:33.883 [2024-07-25 10:09:18.930183] nvme_rdma.c:2367:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa00 length 0x10 lkey 0x182000
00:20:33.883 [2024-07-25 10:09:18.930188] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:20:33.883 Used: 0%
00:20:33.883 Data Units Read: 0
00:20:33.883 Data Units Written: 0
00:20:33.883 Host Read Commands: 0
00:20:33.883 Host Write Commands: 0
00:20:33.883 Controller Busy Time: 0 minutes
00:20:33.883 Power Cycles: 0
00:20:33.883 Power On Hours: 0 hours
00:20:33.883 Unsafe Shutdowns: 0
00:20:33.883 Unrecoverable Media Errors: 0
00:20:33.883 Lifetime Error Log Entries: 0
00:20:33.883 Warning Temperature Time: 0 minutes
00:20:33.883 Critical Temperature Time: 0 minutes
00:20:33.883
00:20:33.883 Number of Queues
00:20:33.883 ================
00:20:33.883 Number of I/O Submission Queues: 127
00:20:33.883 Number of I/O Completion Queues: 127
00:20:33.883
00:20:33.883 Active Namespaces
00:20:33.883 =================
00:20:33.883 Namespace ID:1
00:20:33.883 Error Recovery Timeout: Unlimited
00:20:33.883 Command Set Identifier: NVM (00h)
00:20:33.883 Deallocate: Supported
00:20:33.883 Deallocated/Unwritten Error: Not Supported
00:20:33.883 Deallocated Read Value: Unknown
00:20:33.883 Deallocate in Write Zeroes: Not Supported
00:20:33.883 Deallocated Guard Field: 0xFFFF
00:20:33.883 Flush: Supported
00:20:33.883 Reservation: Supported
00:20:33.883 Namespace Sharing Capabilities: Multiple Controllers
00:20:33.883 Size (in LBAs): 131072 (0GiB)
00:20:33.883 Capacity (in LBAs): 131072 (0GiB)
00:20:33.883 Utilization (in LBAs): 131072 (0GiB)
00:20:33.883 NGUID: ABCDEF0123456789ABCDEF0123456789
00:20:33.883 EUI64: ABCDEF0123456789
00:20:33.883 UUID: 64175a12-3b37-4473-9fc2-b733ef114a87
00:20:33.883 Thin Provisioning: Not Supported
00:20:33.883 Per-NS Atomic Units: Yes
00:20:33.883 Atomic Boundary Size (Normal): 0
00:20:33.883 Atomic Boundary Size (PFail): 0
00:20:33.883 Atomic Boundary Offset: 0
00:20:33.883 Maximum Single Source Range Length: 65535
00:20:33.883 Maximum Copy Length: 65535
00:20:33.883 Maximum Source Range Count: 1
00:20:33.883 NGUID/EUI64 Never Reused: No
00:20:33.883 Namespace Write Protected: No
00:20:33.883 Number of LBA Formats: 1
00:20:33.883 Current LBA Format: LBA Format #00
00:20:33.883 LBA Format #00: Data Size: 512 Metadata Size: 0
00:20:33.883
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
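[Note: the long run of FABRIC PROPERTY GET commands above is the SPDK initiator's shutdown path (nvme_ctrlr_shutdown_poll_async) polling the controller status register over the admin queue; on NVMe-oF, register reads are carried as Fabrics Property Get commands, so the same command repeats until the returned value flips from cdw0:1 (ready) to cdw0:9 (ready plus shutdown complete). The sync and rpc_cmd steps traced just above amount to the following sketch, using the rpc.py path this log assigns to rpc_py further down:]

    sync                                                   # flush before removing the test subsystem
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
        nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop cnode1 from the target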
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:20:33.883 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:20:33.884 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:33.884 10:09:18 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:20:33.884 rmmod nvme_rdma
00:20:33.884 rmmod nvme_fabrics
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2610648 ']'
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2610648
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2610648 ']'
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2610648
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2610648
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2610648'
00:20:34.142 killing process with pid 2610648
00:20:34.142 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2610648
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2610648
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:20:34.401
00:20:34.401 real 0m7.552s
00:20:34.401 user 0m7.991s
00:20:34.401 sys 0m4.646s
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:34.401 ************************************
00:20:34.401 END TEST nvmf_identify
00:20:34.401 ************************************
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
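[Note: the START TEST / END TEST banners, the '[' 3 -le 1 ']' argument check, and the real/user/sys lines around each test all come from the run_test wrapper in common/autotest_common.sh. A simplified sketch of its observable behavior is shown below; this is an illustration, not the verbatim SPDK helper:]

    run_test() {
        [ $# -le 1 ] && return 1           # the '[' 3 -le 1 ']' guard traced above
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                          # emits the real/user/sys timing lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }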
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:34.401 ************************************
00:20:34.401 START TEST nvmf_perf
00:20:34.401 ************************************
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma
00:20:34.401 * Looking for test storage...
00:20:34.401 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same golangci/protoc/go trio repeated six more times ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
[... paths/export.sh@3 and @4 prepend the same go (1.21.1) and protoc (21.7) entries and print equally repetitive PATH values, @5 exports PATH, and @6 echoes the final value; elided here as near-verbatim duplicates of the line above ...]
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:34.401 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@51
-- # have_pci_nics=0 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.402 10:09:19 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.969 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:20:40.970 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:20:40.970 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 
0 )) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:20:40.970 Found net devices under 0000:da:00.0: mlx_0_0 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:20:40.970 Found net devices under 0000:da:00.1: mlx_0_1 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@420 -- # rdma_device_init 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # uname 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@63 -- # modprobe ib_core 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:20:40.970 10:09:24 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:40.970 10:09:25 
nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:20:40.970 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.970 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:20:40.970 altname enp218s0f0np0 00:20:40.970 altname ens818f0np0 00:20:40.970 inet 192.168.100.8/24 scope global mlx_0_0 00:20:40.970 valid_lft forever preferred_lft forever 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 
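Note: the get_ip_address helper traced at nvmf/common.sh@112-113 (completed above for mlx_0_0, continuing below for mlx_0_1) reduces to a three-stage pipeline. A minimal sketch reconstructed from the trace, not the verbatim nvmf/common.sh source, assuming iproute2 is installed and each interface holds a single IPv4 address:

    get_ip_address() {
        local interface=$1
        # "ip -o -4 addr show" prints one line per IPv4 address; field 4 is
        # "ADDR/PREFIX", so awk selects it and cut strips the prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    get_ip_address mlx_0_0   # prints 192.168.100.8 on this rig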
00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:20:40.970 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.970 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:20:40.970 altname enp218s0f1np1 00:20:40.970 altname ens818f1np1 00:20:40.970 inet 192.168.100.9/24 scope global mlx_0_1 00:20:40.970 valid_lft forever preferred_lft forever 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.970 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # continue 2 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_0 
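Note: get_rdma_if_list (nvmf/common.sh@92-105, traced above for allocate_nic_ips and re-entered below for get_available_rdma_ips) intersects the detected net devices with the RDMA-capable ones reported by rxe_cfg. A sketch reconstructed from the trace; the net_devs array and the rxe_cfg wrapper are provided by nvmf/common.sh:

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        # rxe_cfg rxe-net lists the netdevs that can back an RDMA device
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)
        for net_dev in "${net_devs[@]}"; do
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2   # emit each matching netdev once
                fi
            done
        done
    }

The two addresses it turns up here (192.168.100.8 and 192.168.100.9) are later split into targets exactly as the @457/@458 entries below show:

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)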
00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:20:40.971 192.168.100.9' 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:20:40.971 192.168.100.9' 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # head -n 1 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:20:40.971 192.168.100.9' 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # tail -n +2 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # head -n 1 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2613969 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2613969 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2613969 ']' 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:40.971 10:09:25 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:40.971 [2024-07-25 10:09:25.225062] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:40.971 [2024-07-25 10:09:25.225107] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.971 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.971 [2024-07-25 10:09:25.293111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.971 [2024-07-25 10:09:25.371785] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.971 [2024-07-25 10:09:25.371821] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.971 [2024-07-25 10:09:25.371828] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.971 [2024-07-25 10:09:25.371833] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.971 [2024-07-25 10:09:25.371838] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.971 [2024-07-25 10:09:25.371894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.971 [2024-07-25 10:09:25.372002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.971 [2024-07-25 10:09:25.372105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.971 [2024-07-25 10:09:25.372107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.971 10:09:26 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:40.971 10:09:26 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:20:40.971 10:09:26 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:40.971 10:09:26 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:40.971 10:09:26 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:40.971 10:09:26 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.971 10:09:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:40.971 10:09:26 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:44.253 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:44.253 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:44.253 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:20:44.253 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
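Note: host/perf.sh@30-31 above pull the local NVMe controller's PCI address out of the generated bdev config and put a RAM-backed bdev beside it; the entries just below capture the results. The same two calls as a sketch, with the absolute rpc.py path shortened; the jq filter is copied from the trace:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    # traddr of the bdev named Nvme0, registered by gen_nvme.sh above
    local_nvme_trid=$($rpc framework_get_config bdev \
        | jq -r '.[].params | select(.name=="Nvme0").traddr')   # 0000:5e:00.0 here
    # 64 MB malloc bdev with 512-byte blocks; rpc.py prints the name "Malloc0"
    bdevs=" $($rpc bdev_malloc_create 64 512)"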
00:20:44.511 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:44.511 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:20:44.511 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:44.511 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:20:44.511 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:20:44.511 [2024-07-25 10:09:29.621095] rdma.c:2731:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:20:44.511 [2024-07-25 10:09:29.640873] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x7f4190/0x801d00) succeed. 00:20:44.511 [2024-07-25 10:09:29.650124] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x7f57d0/0x881d40) succeed. 00:20:44.769 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:45.027 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:45.027 10:09:29 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:45.027 10:09:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:45.027 10:09:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:45.285 10:09:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:45.543 [2024-07-25 10:09:30.489211] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:45.543 10:09:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:20:45.543 10:09:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:20:45.543 10:09:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:20:45.543 10:09:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:45.543 10:09:30 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:20:46.986 Initializing NVMe Controllers 00:20:46.986 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:20:46.986 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:20:46.986 Initialization complete. Launching workers. 
00:20:46.986 ========================================================
00:20:46.986 Latency(us)
00:20:46.986 Device Information : IOPS MiB/s Average min max
00:20:46.986 PCIE (0000:5e:00.0) NSID 1 from core 0: 98825.42 386.04 323.34 29.39 4440.61
00:20:46.986 ========================================================
00:20:46.986 Total : 98825.42 386.04 323.34 29.39 4440.61
00:20:46.986
00:20:46.986 10:09:31 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:20:46.986 EAL: No free 2048 kB hugepages reported on node 1
00:20:50.270 Initializing NVMe Controllers
00:20:50.270 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:50.270 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:50.270 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:50.270 Initialization complete. Launching workers.
00:20:50.270 ========================================================
00:20:50.270 Latency(us)
00:20:50.270 Device Information : IOPS MiB/s Average min max
00:20:50.270 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6670.00 26.05 149.72 48.35 7071.78
00:20:50.270 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5222.00 20.40 191.30 68.28 7065.57
00:20:50.270 ========================================================
00:20:50.270 Total : 11892.00 46.45 167.98 48.35 7071.78
00:20:50.270
00:20:50.270 10:09:35 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:20:53.550 EAL: No free 2048 kB hugepages reported on node 1
00:20:53.550 Initializing NVMe Controllers
00:20:53.550 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:53.550 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:53.550 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:53.550 Initialization complete. Launching workers.
00:20:53.550 ========================================================
00:20:53.550 Latency(us)
00:20:53.550 Device Information : IOPS MiB/s Average min max
00:20:53.550 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18055.16 70.53 1772.10 501.94 5545.87
00:20:53.550 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3997.82 15.62 8003.78 5750.72 10173.95
00:20:53.550 ========================================================
00:20:53.550 Total : 22052.98 86.14 2901.80 501.94 10173.95
00:20:53.550
00:20:53.550 10:09:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]]
00:20:53.550 10:09:38 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420'
00:20:53.550 EAL: No free 2048 kB hugepages reported on node 1
00:20:58.816 Initializing NVMe Controllers
00:20:58.816 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:58.816 Controller IO queue size 128, less than required.
00:20:58.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:58.816 Controller IO queue size 128, less than required.
00:20:58.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:58.816 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:58.816 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:58.816 Initialization complete. Launching workers.
00:20:58.816 ========================================================
00:20:58.816 Latency(us)
00:20:58.816 Device Information : IOPS MiB/s Average min max
00:20:58.816 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3898.99 974.75 32980.81 14887.80 73268.35
00:20:58.816 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4086.44 1021.61 31100.10 14183.74 48782.18
00:20:58.816 ========================================================
00:20:58.816 Total : 7985.43 1996.36 32018.38 14183.74 73268.35
00:20:58.816
00:20:58.816 10:09:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4
00:20:58.816 EAL: No free 2048 kB hugepages reported on node 1
00:20:58.816 No valid NVMe controllers or AIO or URING devices found
00:20:58.816 Initializing NVMe Controllers
00:20:58.816 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:20:58.816 Controller IO queue size 128, less than required.
00:20:58.816 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:58.816 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:20:58.817 Controller IO queue size 128, less than required.
00:20:58.817 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:58.817 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:20:58.817 WARNING: Some requested NVMe devices were skipped
00:20:58.817 10:09:43 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat
00:20:58.817 EAL: No free 2048 kB hugepages reported on node 1
00:21:03.001 Initializing NVMe Controllers
00:21:03.001 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1
00:21:03.001 Controller IO queue size 128, less than required.
00:21:03.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:03.001 Controller IO queue size 128, less than required.
00:21:03.001 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:03.001 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:03.001 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:03.001 Initialization complete. Launching workers.
00:21:03.001
00:21:03.001 ====================
00:21:03.001 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:21:03.001 RDMA transport:
00:21:03.001 dev name: mlx5_0
00:21:03.001 polls: 400082
00:21:03.001 idle_polls: 396587
00:21:03.001 completions: 43826
00:21:03.001 queued_requests: 1
00:21:03.001 total_send_wrs: 21913
00:21:03.001 send_doorbell_updates: 3230
00:21:03.001 total_recv_wrs: 22040
00:21:03.001 recv_doorbell_updates: 3232
00:21:03.001 ---------------------------------
00:21:03.001
00:21:03.001 ====================
00:21:03.001 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:21:03.001 RDMA transport:
00:21:03.001 dev name: mlx5_0
00:21:03.001 polls: 402254
00:21:03.001 idle_polls: 401978
00:21:03.001 completions: 20446
00:21:03.001 queued_requests: 1
00:21:03.001 total_send_wrs: 10223
00:21:03.001 send_doorbell_updates: 256
00:21:03.001 total_recv_wrs: 10350
00:21:03.001 recv_doorbell_updates: 257
00:21:03.001 ---------------------------------
00:21:03.001 ========================================================
00:21:03.001 Latency(us)
00:21:03.001 Device Information : IOPS MiB/s Average min max
00:21:03.001 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5478.00 1369.50 23430.61 10678.34 53810.48
00:21:03.001 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2555.50 638.88 50186.30 24044.67 75934.75
00:21:03.001 ========================================================
00:21:03.001 Total : 8033.50 2008.38 31941.74 10678.34 75934.75
00:21:03.001
00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@117 --
# sync 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:03.002 rmmod nvme_rdma 00:21:03.002 rmmod nvme_fabrics 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2613969 ']' 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2613969 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2613969 ']' 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2613969 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:03.002 10:09:47 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2613969 00:21:03.002 10:09:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:03.002 10:09:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:03.002 10:09:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2613969' 00:21:03.002 killing process with pid 2613969 00:21:03.002 10:09:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 2613969 00:21:03.002 10:09:48 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2613969 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:05.534 00:21:05.534 real 0m30.639s 00:21:05.534 user 1m40.896s 00:21:05.534 sys 0m5.198s 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:05.534 ************************************ 00:21:05.534 END TEST nvmf_perf 00:21:05.534 ************************************ 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.534 ************************************ 00:21:05.534 START TEST nvmf_fio_host 00:21:05.534 ************************************ 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:21:05.534 * Looking for test storage... 00:21:05.534 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.534 
10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.534 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:05.535 10:09:50 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:05.535 10:09:50 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.809 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
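Note: the e810/x722/mlx arrays being filled here (the remaining Mellanox device IDs continue just below) index into pci_bus_cache, which nvmf/common.sh populates earlier with one "vendor:device" -> PCI-address entry per ID seen on the bus. A reconstructed sketch of the pattern, not the verbatim source:

    mellanox=0x15b3
    mlx=()
    # each lookup expands to zero or more PCI addresses for that device ID;
    # 0x1015 is the ID behind both ports on this rig (0000:da:00.0 and .1)
    mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
    pci_devs+=("${e810[@]}" "${x722[@]}" "${mlx[@]}")
    # with mlx5 hardware the candidate list is then narrowed to Mellanox only
    pci_devs=("${mlx[@]}")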
00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:10.810 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:10.810 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:10.810 Found net devices under 0000:da:00.0: mlx_0_0 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:10.810 Found net devices under 0000:da:00.1: mlx_0_1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@420 -- # rdma_device_init 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # uname 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev 
rxe_net_dev rxe_net_devs 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:21:10.810 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:10.810 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:21:10.810 altname enp218s0f0np0 00:21:10.810 altname ens818f0np0 00:21:10.810 inet 192.168.100.8/24 scope global mlx_0_0 00:21:10.810 valid_lft forever preferred_lft forever 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:10.810 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:10.810 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:21:10.810 altname enp218s0f1np1 00:21:10.810 altname ens818f1np1 00:21:10.810 inet 192.168.100.9/24 scope global mlx_0_1 00:21:10.810 valid_lft forever preferred_lft forever 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # continue 2 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:10.810 10:09:55 
nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:10.810 192.168.100.9' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:10.810 192.168.100.9' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # head -n 1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:10.810 192.168.100.9' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # tail -n +2 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # head -n 1 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2621198 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2621198 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2621198 ']' 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.810 10:09:55 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.069 [2024-07-25 10:09:55.975775] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:11.069 [2024-07-25 10:09:55.975822] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:11.069 EAL: No free 2048 kB hugepages reported on node 1 00:21:11.069 [2024-07-25 10:09:56.043136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:11.069 [2024-07-25 10:09:56.116788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:11.069 [2024-07-25 10:09:56.116829] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:11.069 [2024-07-25 10:09:56.116836] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:11.069 [2024-07-25 10:09:56.116843] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:11.069 [2024-07-25 10:09:56.116847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:11.069 [2024-07-25 10:09:56.116907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:11.069 [2024-07-25 10:09:56.117015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:11.069 [2024-07-25 10:09:56.117100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:11.069 [2024-07-25 10:09:56.117099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:11.634 10:09:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:11.634 10:09:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:21:11.634 10:09:56 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:11.892 [2024-07-25 10:09:56.959148] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd9dcc0/0xda21b0) succeed. 00:21:11.892 [2024-07-25 10:09:56.968136] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd9f300/0xde3840) succeed. 
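For readers skimming the trace: the target bring-up above, plus the subsystem wiring that follows, reduces to a short rpc.py sequence. A minimal sketch, reusing the same paths, NQN, and parameters that appear in this log:

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    # Create the RDMA transport that the fabric listeners will use
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    # Back the namespace with a 64 MiB malloc bdev using 512-byte blocks
    $RPC bdev_malloc_create 64 512 -b Malloc1
    # Create the subsystem, attach the bdev, and listen on the first RDMA IP
    $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns $NQN Malloc1
    $RPC nvmf_subsystem_add_listener $NQN -t rdma -a 192.168.100.8 -s 4420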
00:21:12.150 10:09:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:12.150 10:09:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:12.150 10:09:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.150 10:09:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:12.409 Malloc1 00:21:12.409 10:09:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.409 10:09:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:12.668 10:09:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:12.668 [2024-07-25 10:09:57.813577] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:12.926 10:09:57 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:12.926 10:09:58 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:12.926 10:09:58 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:21:13.184 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:13.184 fio-3.35 00:21:13.184 Starting 1 thread 00:21:13.441 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.978 00:21:15.978 test: (groupid=0, jobs=1): err= 0: pid=2621712: Thu Jul 25 10:10:00 2024 00:21:15.979 read: IOPS=17.4k, BW=67.9MiB/s (71.2MB/s)(136MiB/2004msec) 00:21:15.979 slat (nsec): min=1377, max=28227, avg=1516.43, stdev=403.66 00:21:15.979 clat (usec): min=1831, max=6774, avg=3650.38, stdev=124.58 00:21:15.979 lat (usec): min=1847, max=6776, avg=3651.90, stdev=124.54 00:21:15.979 clat percentiles (usec): 00:21:15.979 | 1.00th=[ 3621], 5.00th=[ 3621], 10.00th=[ 3621], 20.00th=[ 3621], 00:21:15.979 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3654], 60.00th=[ 3654], 00:21:15.979 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3654], 95.00th=[ 3654], 00:21:15.979 | 99.00th=[ 3851], 99.50th=[ 4293], 99.90th=[ 5538], 99.95th=[ 5669], 00:21:15.979 | 99.99th=[ 6718] 00:21:15.979 bw ( KiB/s): min=68280, max=70544, per=100.00%, avg=69604.00, stdev=961.79, samples=4 00:21:15.979 iops : min=17070, max=17636, avg=17401.00, stdev=240.45, samples=4 00:21:15.979 write: IOPS=17.4k, BW=68.0MiB/s (71.3MB/s)(136MiB/2004msec); 0 zone resets 00:21:15.979 slat (nsec): min=1409, max=17698, avg=1585.87, stdev=421.42 00:21:15.979 clat (usec): min=1855, max=6786, avg=3650.57, stdev=136.56 00:21:15.979 lat (usec): min=1863, max=6788, avg=3652.15, stdev=136.52 00:21:15.979 clat percentiles (usec): 00:21:15.979 | 1.00th=[ 3621], 5.00th=[ 3621], 10.00th=[ 3621], 20.00th=[ 3621], 00:21:15.979 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3654], 00:21:15.979 | 70.00th=[ 3654], 80.00th=[ 3654], 90.00th=[ 3654], 95.00th=[ 3654], 00:21:15.979 | 99.00th=[ 3851], 99.50th=[ 4686], 99.90th=[ 5669], 99.95th=[ 6259], 00:21:15.979 | 99.99th=[ 6783] 00:21:15.979 bw ( KiB/s): min=68448, max=70344, per=99.99%, avg=69668.00, stdev=868.69, samples=4 00:21:15.979 iops : min=17112, max=17586, avg=17417.00, stdev=217.17, samples=4 00:21:15.979 lat (msec) : 2=0.02%, 4=99.27%, 10=0.71% 00:21:15.979 cpu : usr=99.65%, sys=0.00%, 
ctx=14, majf=0, minf=4 00:21:15.979 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:15.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:15.979 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:15.979 issued rwts: total=34857,34906,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:15.979 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:15.979 00:21:15.979 Run status group 0 (all jobs): 00:21:15.979 READ: bw=67.9MiB/s (71.2MB/s), 67.9MiB/s-67.9MiB/s (71.2MB/s-71.2MB/s), io=136MiB (143MB), run=2004-2004msec 00:21:15.979 WRITE: bw=68.0MiB/s (71.3MB/s), 68.0MiB/s-68.0MiB/s (71.3MB/s-71.3MB/s), io=136MiB (143MB), run=2004-2004msec 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:21:15.979 10:10:00 
nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:15.979 10:10:00 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:21:15.979 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:15.979 fio-3.35 00:21:15.979 Starting 1 thread 00:21:15.979 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.538 00:21:18.538 test: (groupid=0, jobs=1): err= 0: pid=2622192: Thu Jul 25 10:10:03 2024 00:21:18.538 read: IOPS=14.1k, BW=221MiB/s (231MB/s)(435MiB/1971msec) 00:21:18.538 slat (nsec): min=2283, max=39417, avg=2730.73, stdev=1245.03 00:21:18.538 clat (usec): min=478, max=9764, avg=1682.49, stdev=1372.97 00:21:18.538 lat (usec): min=480, max=9769, avg=1685.22, stdev=1373.49 00:21:18.538 clat percentiles (usec): 00:21:18.538 | 1.00th=[ 701], 5.00th=[ 799], 10.00th=[ 857], 20.00th=[ 938], 00:21:18.538 | 30.00th=[ 1012], 40.00th=[ 1090], 50.00th=[ 1205], 60.00th=[ 1319], 00:21:18.538 | 70.00th=[ 1467], 80.00th=[ 1663], 90.00th=[ 4293], 95.00th=[ 5014], 00:21:18.538 | 99.00th=[ 6849], 99.50th=[ 7373], 99.90th=[ 8848], 99.95th=[ 9372], 00:21:18.538 | 99.99th=[ 9634] 00:21:18.538 bw ( KiB/s): min=109056, max=112448, per=49.13%, avg=110952.00, stdev=1577.09, samples=4 00:21:18.538 iops : min= 6816, max= 7028, avg=6934.50, stdev=98.57, samples=4 00:21:18.538 write: IOPS=8070, BW=126MiB/s (132MB/s)(226MiB/1794msec); 0 zone resets 00:21:18.538 slat (usec): min=27, max=113, avg=30.44, stdev= 7.14 00:21:18.538 clat (usec): min=4187, max=18163, avg=12817.23, stdev=1750.71 00:21:18.538 lat (usec): min=4217, max=18190, avg=12847.67, stdev=1750.35 00:21:18.538 clat percentiles (usec): 00:21:18.538 | 1.00th=[ 7504], 5.00th=[10290], 10.00th=[10814], 20.00th=[11469], 00:21:18.538 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13173], 00:21:18.538 | 70.00th=[13698], 80.00th=[14222], 90.00th=[14877], 95.00th=[15795], 00:21:18.538 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:21:18.538 | 99.99th=[17957] 00:21:18.538 bw ( KiB/s): min=112384, max=117056, per=88.98%, avg=114888.00, stdev=2312.63, samples=4 00:21:18.538 iops : min= 7024, max= 7316, avg=7180.50, stdev=144.54, samples=4 00:21:18.538 lat (usec) : 500=0.01%, 750=1.71%, 1000=17.19% 00:21:18.538 lat (msec) : 2=37.49%, 4=2.33%, 10=8.15%, 20=33.13% 00:21:18.538 cpu : usr=97.21%, sys=1.10%, ctx=186, majf=0, minf=3 00:21:18.538 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:21:18.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:18.538 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:18.538 issued rwts: total=27818,14478,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:18.538 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:18.538 00:21:18.538 Run status group 0 (all jobs): 00:21:18.538 READ: bw=221MiB/s (231MB/s), 221MiB/s-221MiB/s (231MB/s-231MB/s), io=435MiB (456MB), run=1971-1971msec 00:21:18.538 WRITE: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=226MiB (237MB), run=1794-1794msec 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:21:18.538 rmmod nvme_rdma 00:21:18.538 rmmod nvme_fabrics 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2621198 ']' 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2621198 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2621198 ']' 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2621198 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2621198 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2621198' 00:21:18.538 killing process with pid 2621198 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2621198 00:21:18.538 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2621198 00:21:18.798 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:18.798 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:21:18.798 00:21:18.798 real 0m13.743s 00:21:18.798 user 0m48.721s 00:21:18.798 sys 0m5.168s 00:21:18.798 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:18.798 10:10:03 nvmf_rdma.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.798 ************************************ 00:21:18.798 END TEST nvmf_fio_host 00:21:18.798 ************************************ 
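The fio jobs in the test above never touch a kernel block device: the SPDK fio plugin is LD_PRELOADed into fio, and the NVMe-oF connection parameters ride in through --filename. Condensed from the trace (the sanitizer probing via ldd/grep libasan, which only prepends an ASan runtime to LD_PRELOAD when one is linked in, is skipped here):

    FIO_PLUGIN=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme
    JOB=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio
    # ioengine=spdk comes from the job file; --filename carries the transport tuple
    LD_PRELOAD="$FIO_PLUGIN" /usr/src/fio/fio "$JOB" \
        '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096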
00:21:18.798 10:10:03 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:18.798 10:10:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:18.798 10:10:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:18.798 10:10:03 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.798 ************************************ 00:21:18.798 START TEST nvmf_failover 00:21:18.798 ************************************ 00:21:18.798 10:10:03 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:21:19.057 * Looking for test storage... 00:21:19.057 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.057 10:10:04 
nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- 
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:19.057 10:10:04 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.331 10:10:09 
nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:21:24.331 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:21:24.332 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:21:24.332 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound 
]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:21:24.332 Found net devices under 0000:da:00.0: mlx_0_0 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:21:24.332 Found net devices under 0000:da:00.1: mlx_0_1 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@420 -- # rdma_device_init 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # uname 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:21:24.332 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@62 -- # modprobe ib_cm 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@63 -- # modprobe ib_core 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@64 
-- # modprobe ib_umad 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@66 -- # modprobe iw_cm 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # allocate_nic_ips 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # get_rdma_if_list 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # 
ip addr show mlx_0_0 00:21:24.592 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:24.592 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:21:24.592 altname enp218s0f0np0 00:21:24.592 altname ens818f0np0 00:21:24.592 inet 192.168.100.8/24 scope global mlx_0_0 00:21:24.592 valid_lft forever preferred_lft forever 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:21:24.592 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:24.592 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:21:24.592 altname enp218s0f1np1 00:21:24.592 altname ens818f1np1 00:21:24.592 inet 192.168.100.9/24 scope global mlx_0_1 00:21:24.592 valid_lft forever preferred_lft forever 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # get_rdma_if_list 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_0 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # echo mlx_0_1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # continue 2 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # awk '{print $4}' 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@113 -- # cut -d/ -f1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:21:24.592 192.168.100.9' 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:21:24.592 192.168.100.9' 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # head -n 1 00:21:24.592 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:21:24.593 192.168.100.9' 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # tail -n +2 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # head -n 1 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.593 10:10:09 
nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2625697 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2625697 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2625697 ']' 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.593 10:10:09 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:24.593 [2024-07-25 10:10:09.735844] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:24.593 [2024-07-25 10:10:09.735891] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.852 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.852 [2024-07-25 10:10:09.802814] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:24.852 [2024-07-25 10:10:09.882042] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.852 [2024-07-25 10:10:09.882078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.852 [2024-07-25 10:10:09.882085] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.852 [2024-07-25 10:10:09.882092] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.852 [2024-07-25 10:10:09.882096] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
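waitforlisten, traced above, simply polls the application's RPC UNIX socket until it answers; the max_retries=100 in the trace bounds the wait. A simplified sketch of the idea (the exact probe autotest_common.sh uses may differ; rpc_get_methods is assumed here as a cheap RPC that any SPDK app answers):

    RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    for ((i = 0; i < 100; i++)); do
        # Succeeds once nvmf_tgt is up and listening on /var/tmp/spdk.sock
        $RPC -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done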
00:21:24.852 [2024-07-25 10:10:09.882206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:24.852 [2024-07-25 10:10:09.882313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:21:24.852 [2024-07-25 10:10:09.882314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:21:25.419 10:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:25.419 10:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:21:25.419 10:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:21:25.419 10:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:25.419 10:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:25.678 10:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:21:25.678 10:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:21:25.678 [2024-07-25 10:10:10.773458] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x214f200/0x21536f0) succeed.
00:21:25.678 [2024-07-25 10:10:10.782363] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21507a0/0x2194d80) succeed.
00:21:25.935 10:10:10 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:21:25.935 Malloc0
00:21:25.935 10:10:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:26.194 10:10:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:21:26.452 10:10:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:21:26.709 [2024-07-25 10:10:11.626675] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:21:26.709 10:10:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:21:26.709 [2024-07-25 10:10:11.811074] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:21:26.709 10:10:11 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:21:26.968 [2024-07-25 10:10:11.979674] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:21:26.968 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:21:26.968 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2626153
00:21:26.968 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:26.968 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2626153 /var/tmp/bdevperf.sock
00:21:26.968 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2626153 ']'
00:21:26.968 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:26.968 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:26.968 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:26.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:26.968 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:26.968 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:27.902 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:27.902 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:21:27.903 10:10:12 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:28.160 NVMe0n1
00:21:28.160 10:10:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:28.420 00
00:21:28.420 10:10:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2626381
00:21:28.420 10:10:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:28.420 10:10:13 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:21:29.355 10:10:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:21:29.613 10:10:14 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:21:32.895 10:10:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:32.895 00
00:21:32.895 10:10:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:21:32.895 10:10:17 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:21:36.179 10:10:20 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:21:36.179 [2024-07-25 10:10:21.156729] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:21:36.179 10:10:21 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:21:37.113 10:10:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:21:37.372 10:10:22 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2626381
00:21:43.978 0
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2626153
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2626153 ']'
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2626153
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2626153
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2626153'
00:21:43.978 killing process with pid 2626153
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2626153
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2626153
00:21:43.978 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:43.978 [2024-07-25 10:10:12.039313] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:21:43.978 [2024-07-25 10:10:12.039361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2626153 ]
00:21:43.978 EAL: No free 2048 kB hugepages reported on node 1
00:21:43.978 [2024-07-25 10:10:12.107402] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:43.978 [2024-07-25 10:10:12.182125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:43.978 Running I/O for 15 seconds...
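The whole exercise above reduces to a short RPC sequence: export one malloc namespace over three RDMA listeners, attach two paths from the bdevperf side, then drop the active listener to force a failover. A minimal stand-alone sketch (commands and addresses taken from the log; the loop is an editorial condensation, not the test's own code):

  rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed checkout path, from the log
  $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s $port
  done
  # two paths on the bdevperf side; removing the active listener triggers the failover
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420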
00:21:43.978 [2024-07-25 10:10:15.526253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b8000 len:0x1000 key:0x184500
00:21:43.978 [2024-07-25 10:10:15.526294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0
00:21:43.978 [2024-07-25 10:10:15.526310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:21792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075b6000 len:0x1000 key:0x184500
00:21:43.978 [2024-07-25 10:10:15.526320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0
[... identical command/completion pairs for READ lba:21800 through lba:22520 and WRITE lba:22528 through lba:22776, all aborted with ABORTED - SQ DELETION (00/08), elided ...]
00:21:43.982 [2024-07-25 10:10:15.536582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:22784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:43.982 [2024-07-25 10:10:15.536592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0
00:21:43.982 [2024-07-25 10:10:15.536603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:22792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:21:43.982 [2024-07-25 10:10:15.536610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0
00:21:43.982 [2024-07-25 10:10:15.538439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:43.982 [2024-07-25 10:10:15.538454] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:43.982 [2024-07-25 10:10:15.538463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22800 len:8 PRP1 0x0 PRP2 0x0
00:21:43.982 [2024-07-25 10:10:15.538472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.982 [2024-07-25 10:10:15.538514] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:21:43.982 [2024-07-25 10:10:15.538525] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:21:43.982 [2024-07-25 10:10:15.538541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
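At this point the qpair to 192.168.100.8:4420 has been torn down, the in-flight I/O aborted, and bdev_nvme has begun failing over to the 4421 path attached earlier. To watch the path switch from outside while the test runs, one could poll the controller state over bdevperf's RPC socket; an illustrative command, not something failover.sh itself issues:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0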
00:21:43.982 [2024-07-25 10:10:15.538580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:43.982 [2024-07-25 10:10:15.538592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.982 [2024-07-25 10:10:15.538602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:43.982 [2024-07-25 10:10:15.538611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.982 [2024-07-25 10:10:15.538621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:43.982 [2024-07-25 10:10:15.538630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.982 [2024-07-25 10:10:15.538640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:21:43.982 [2024-07-25 10:10:15.538649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.982 [2024-07-25 10:10:15.556912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:43.982 [2024-07-25 10:10:15.556931] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:21:43.982 [2024-07-25 10:10:15.556939] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:21:43.982 [2024-07-25 10:10:15.559782] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:43.982 [2024-07-25 10:10:15.605467] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
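The controller reset completes and I/O resumes on the surviving path. When reading a log like this offline, the failover milestones can be pulled out of the dumped try.txt with a simple filter; an illustrative one-liner, not part of the test:

  grep -E 'Start failover|resetting controller|Resetting controller successful' try.txt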
00:21:43.982 [2024-07-25 10:10:18.964516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:103072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:103120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:103136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:103160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:103192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758c000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758a000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:103208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007588000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.982 [2024-07-25 10:10:18.964840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 
nsid:1 lba:103216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x184500 00:21:43.982 [2024-07-25 10:10:18.964849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.964858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.964865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.964873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:103232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007582000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.964880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.964889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.964896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.964904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.964911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.964919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.964927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.964936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.964944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.964954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.964961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.964969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.964977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.964985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.964991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.965006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.965021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:103272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:103288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965135] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:103824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.965156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:103832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.965171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:103840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.965185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.965200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:103856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.965215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:103864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.965230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:103872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.965244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:103880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.983 [2024-07-25 10:10:18.965258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:103344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.983 [2024-07-25 10:10:18.965403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x184500 00:21:43.983 [2024-07-25 10:10:18.965410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 
[2024-07-25 10:10:18.965418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:103392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:103896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:103912 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:103920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:103928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:103944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:103464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 
p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965833] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x184500 00:21:43.984 [2024-07-25 10:10:18.965855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:103952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:103960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:103984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:103992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:104000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.984 [2024-07-25 10:10:18.965958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.984 [2024-07-25 10:10:18.965965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.985 [2024-07-25 10:10:18.965972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.965980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.965986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.965994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:103584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:103592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:103600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:103608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:104016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.985 [2024-07-25 10:10:18.966113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966121] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:104024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.985 [2024-07-25 10:10:18.966131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:104032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.985 [2024-07-25 10:10:18.966147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:104040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.985 [2024-07-25 10:10:18.966161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:104048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.985 [2024-07-25 10:10:18.966176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:104056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.985 [2024-07-25 10:10:18.966194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:104064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.985 [2024-07-25 10:10:18.966209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:104072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.985 [2024-07-25 10:10:18.966223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007592000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007594000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:103656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007598000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759c000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759e000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a0000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a4000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 
sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a8000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ac000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:103752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x184500 00:21:43.985 [2024-07-25 10:10:18.966467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.966476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:104080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:43.985 [2024-07-25 10:10:18.975569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.977446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:43.985 [2024-07-25 10:10:18.977461] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:43.985 [2024-07-25 10:10:18.977471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:104088 len:8 PRP1 0x0 PRP2 0x0 00:21:43.985 [2024-07-25 10:10:18.977480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.985 [2024-07-25 10:10:18.977521] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller. 00:21:43.985 [2024-07-25 10:10:18.977534] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:21:43.985 [2024-07-25 10:10:18.977544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:43.985 [2024-07-25 10:10:18.977579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.985 [2024-07-25 10:10:18.977595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.986 [2024-07-25 10:10:18.977605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.986 [2024-07-25 10:10:18.977614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.986 [2024-07-25 10:10:18.977623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.986 [2024-07-25 10:10:18.977632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.986 [2024-07-25 10:10:18.977641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:21:43.986 [2024-07-25 10:10:18.977650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:43.986 [2024-07-25 10:10:18.995989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:21:43.986 [2024-07-25 10:10:18.996005] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:21:43.986 [2024-07-25 10:10:18.996012] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:43.986 [2024-07-25 10:10:18.998803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:43.986 [2024-07-25 10:10:19.041899] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
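After this second cycle (4421 -> 4422) the controller again resets successfully. To confirm which trid a bdev is actually using after such a failover, a plausible check is sketched below; bdev_nvme_get_io_paths and the NVMe0/NVMe0n1 names are assumptions about this build and test, not taken from the log.

# Hedged sketch: list registered trids and the currently active I/O path.
rpc.py bdev_nvme_get_controllers -n NVMe0    # shows all attached trids
rpc.py bdev_nvme_get_io_paths -n NVMe0n1     # marks the path now carrying I/O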
00:21:43.986 [2024-07-25 10:10:23.363056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:66800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x184500
00:21:43.986 [2024-07-25 10:10:23.363095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0
00:21:43.986-00:21:43.989 [2024-07-25 10:10:23.363111-10:10:23.365016] nvme_qpair.c: [the same print_command/print_completion pair repeats for every remaining outstanding I/O on qpair 1: READ lba:66808-67416 (SGL KEYED DATA BLOCK, len:0x1000, key:0x184500) and WRITE lba:67432-67816 (SGL DATA BLOCK OFFSET 0x0, len:0x1000), each completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:f4534000 sqhd:52b0 p:0 m:0 dnr:0]
00:21:43.989 [2024-07-25 10:10:23.366922] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:43.989 [2024-07-25 10:10:23.366935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:43.989 [2024-07-25 10:10:23.366941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67424 len:8 PRP1 0x0 PRP2 0x0
00:21:43.989 [2024-07-25 10:10:23.366948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:43.989 [2024-07-25 10:10:23.366987] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e48c0 was disconnected and freed. reset controller.
00:21:43.989 [2024-07-25 10:10:23.366996] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420
00:21:43.989 [2024-07-25 10:10:23.367004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:43.989 [2024-07-25 10:10:23.369801] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:43.989 [2024-07-25 10:10:23.384013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:43.989 [2024-07-25 10:10:23.426051] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
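The completion status stamped on every aborted command above, ABORTED - SQ DELETION (00/08), decodes as status code type 0x00 (generic command status) with status code 0x08 (command aborted due to SQ deletion): bdev_nvme deletes the submission queue while failing over, so all outstanding I/O on qpair 1 is drained with that status before the controller is reset. A hedged helper for eyeballing this in the captured log; the try.txt path is the file this test writes, and the grep patterns simply match the notices printed above:

#!/usr/bin/env bash
# Minimal sketch, assuming the bdevperf output was captured in try.txt
# as this failover test does; the patterns match the notices shown above.
log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
echo "aborted completions: $(grep -c 'ABORTED - SQ DELETION' "$log")"
echo "failover attempts:   $(grep -c 'Start failover from' "$log")"
echo "successful resets:   $(grep -c 'Resetting controller successful' "$log")"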
00:21:43.989 
00:21:43.989 Latency(us)
00:21:43.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:43.989 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:43.989 Verification LBA range: start 0x0 length 0x4000
00:21:43.989 NVMe0n1 : 15.00 14075.01 54.98 299.44 0.00 8882.86 356.94 1046578.71
00:21:43.989 ===================================================================================================================
00:21:43.989 Total : 14075.01 54.98 299.44 0.00 8882.86 356.94 1046578.71
00:21:43.989 Received shutdown signal, test time was about 15.000000 seconds
00:21:43.989 
00:21:43.989 Latency(us)
00:21:43.989 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:43.989 ===================================================================================================================
00:21:43.989 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:43.989 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:43.989 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:43.989 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2628910
00:21:43.989 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2628910 /var/tmp/bdevperf.sock
00:21:43.989 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:21:43.990 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2628910 ']'
00:21:43.990 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:43.990 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:21:43.990 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
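That grep is the pass/fail gate for the first phase: host/failover.sh expects exactly three 'Resetting controller successful' notices in its own bdevperf log, one per failover hop, before relaunching bdevperf idle (-z) behind a UNIX-socket RPC server for the manual phase. A hedged condensation of that flow; every command and flag below appears in the trace, while the polling loop is only a stand-in for the waitforlisten helper:

#!/usr/bin/env bash
# Minimal sketch of host/failover.sh@65-75 as traced above.
set -e
log=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
bdevperf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf

count=$(grep -c 'Resetting controller successful' "$log")
(( count == 3 )) || { echo "expected 3 resets, got $count"; exit 1; }

# Relaunch bdevperf idle (-z) so the next phase can drive it over RPC.
"$bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
bdevperf_pid=$!
# Stand-in for waitforlisten: poll until the RPC socket exists.
while [ ! -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done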
00:21:43.990 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable
00:21:43.990 10:10:28 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:21:44.555 10:10:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:21:44.555 10:10:29 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0
00:21:44.555 10:10:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421
00:21:44.813 [2024-07-25 10:10:29.783015] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 ***
00:21:44.813 10:10:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422
00:21:44.813 [2024-07-25 10:10:29.951553] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 ***
00:21:45.071 10:10:29 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:45.071 NVMe0n1
00:21:45.329 10:10:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:45.329 
00:21:45.329 10:10:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:45.587 
00:21:45.587 10:10:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:45.587 10:10:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:21:45.845 10:10:30 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:46.103 10:10:31 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:21:49.383 10:10:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:10:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
10:10:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2629842
10:10:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
10:10:34 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2629842
00:21:50.318 0
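The block above builds the multipath topology for the second phase: two more RDMA listeners are published on the subsystem (ports 4421 and 4422), the same controller is attached through all three portals under the single bdev name NVMe0, and the primary portal (4420) is then detached to force the first path switch. A hedged recap, using only the rpc.py calls visible in the trace (the rpc, sock, and nqn shorthands are introduced here):

#!/usr/bin/env bash
# Minimal sketch of host/failover.sh@76-87 as traced above.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

# Publish the subsystem on two additional RDMA portals.
$rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4421
$rpc nvmf_subsystem_add_listener $nqn -t rdma -a 192.168.100.8 -s 4422

# Attaching the same bdev name through extra portals registers failover paths.
for port in 4420 4421 4422; do
    $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t rdma \
        -a 192.168.100.8 -s $port -f ipv4 -n $nqn
done

# Sanity check, then drop the primary path to force a failover.
$rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
$rpc -s $sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 \
    -s 4420 -f ipv4 -n $nqn
sleep 3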
00:21:50.318 10:10:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:21:50.318 [2024-07-25 10:10:28.802031] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:21:50.318 [2024-07-25 10:10:28.802081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2628910 ]
00:21:50.318 EAL: No free 2048 kB hugepages reported on node 1
00:21:50.318 [2024-07-25 10:10:28.869442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:50.318 [2024-07-25 10:10:28.937993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:50.318 [2024-07-25 10:10:31.093238] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421
00:21:50.318 [2024-07-25 10:10:31.093813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:50.318 [2024-07-25 10:10:31.093843] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:50.318 [2024-07-25 10:10:31.114902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:21:50.318 [2024-07-25 10:10:31.130845] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:50.318 Running I/O for 1 seconds...
00:21:50.318 
00:21:50.318 Latency(us)
00:21:50.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:50.318 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:50.318 Verification LBA range: start 0x0 length 0x4000
00:21:50.318 NVMe0n1 : 1.01 17794.87 69.51 0.00 0.00 7153.59 2793.08 13294.45
00:21:50.319 ===================================================================================================================
00:21:50.319 Total : 17794.87 69.51 0.00 0.00 7153.59 2793.08 13294.45
00:21:50.319 10:10:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:21:50.319 10:10:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:50.577 10:10:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:50.834 10:10:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:50.834 10:10:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:21:50.834 10:10:35 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:51.092 10:10:36 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:21:54.371 10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:21:54.371 10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
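With I/O running under perform_tests, the harness then peels away the remaining portals one at a time (4422, then 4421), checking after each detach that the NVMe0 controller is still registered, i.e. that bdev_nvme failed over instead of dropping the device. A hedged sketch of that loop, under the same rpc, sock, and nqn shorthands as before:

#!/usr/bin/env bash
# Minimal sketch of host/failover.sh@95-103 as traced above.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock
nqn=nqn.2016-06.io.spdk:cnode1

for port in 4422 4421; do
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0 || exit 1
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t rdma \
        -a 192.168.100.8 -s $port -f ipv4 -n $nqn
    sleep 3   # allow bdev_nvme to settle on a surviving path
done
$rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0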
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2628910
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2628910 ']'
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2628910
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2628910
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2628910'
killing process with pid 2628910
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2628910
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2628910
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20}
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
rmmod nvme_rdma
rmmod nvme_fabrics
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2625697 ']'
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2625697
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2625697 ']'
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2625697
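Teardown mirrors setup: kill the bdevperf under test, delete the subsystem over the target's RPC, remove the scratch log, and unwind the kernel modules; nvmfcleanup retries modprobe -r in a loop because RDMA teardown can lag behind the process exit. A hedged condensation of the nvmftestfini path traced above (the sleep between retries is an assumption, not visible in the trace):

#!/usr/bin/env bash
# Minimal sketch of host/failover.sh@108-116 plus nvmf/common.sh@488-123.
kill 2628910 && wait 2628910            # stop the bdevperf under test
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
    nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt

set +e
for i in {1..20}; do
    modprobe -v -r nvme-rdma && break
    sleep 1   # assumption: brief pause before retrying module removal
done
modprobe -v -r nvme-fabrics
set -e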
10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:21:54.887 10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:54.887 10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2625697
00:21:54.887 10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:54.887 10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:54.887 10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2625697'
killing process with pid 2625697
00:21:54.887 10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2625697
00:21:54.887 10:10:39 nvmf_rdma.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2625697
00:21:55.145 10:10:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:55.145 10:10:40 nvmf_rdma.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:21:55.146 
00:21:55.146 real 0m36.165s
00:21:55.146 user 2m3.849s
00:21:55.146 sys 0m6.155s
00:21:55.146 10:10:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:21:55.146 10:10:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:55.146 ************************************
00:21:55.146 END TEST nvmf_failover
00:21:55.146 ************************************
00:21:55.146 10:10:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:21:55.146 10:10:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:21:55.146 10:10:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:21:55.146 10:10:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:55.146 ************************************
00:21:55.146 START TEST nvmf_host_discovery
00:21:55.146 ************************************
00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma
00:21:55.146 * Looking for test storage...
00:21:55.146 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:21:55.146 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
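On this transport the discovery suite exits before doing any work; the guard traced at host/discovery.sh @11-13 amounts to the following, where the variable carrying the transport name is an assumption (the trace only shows its expanded value, rdma):

if [ "$TEST_TRANSPORT" == rdma ]; then    # traced as: '[' rdma == rdma ']'
    echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
    exit 0                                # counts as a pass, hence END TEST below
fi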
00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- host/discovery.sh@13 -- # exit 0 00:21:55.146 00:21:55.146 real 0m0.115s 00:21:55.146 user 0m0.054s 00:21:55.146 sys 0m0.068s 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:55.146 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:55.146 ************************************ 00:21:55.146 END TEST nvmf_host_discovery 00:21:55.146 ************************************ 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:55.405 ************************************ 00:21:55.405 START TEST nvmf_host_multipath_status 00:21:55.405 ************************************ 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=rdma 00:21:55.405 * Looking for test storage... 00:21:55.405 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/bpftrace.sh 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.405 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:55.406 10:10:40 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:55.406 10:10:40 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:01.978 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:01.978 10:10:45 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:01.979 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:01.979 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.979 
10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:01.979 Found net devices under 0000:da:00.0: mlx_0_0 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:01.979 Found net devices under 0000:da:00.1: mlx_0_1 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@420 -- # rdma_device_init 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # uname 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:01.979 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip 
addr show mlx_0_0 00:22:01.979 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:01.979 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:22:01.979 altname enp218s0f0np0 00:22:01.979 altname ens818f0np0 00:22:01.979 inet 192.168.100.8/24 scope global mlx_0_0 00:22:01.979 valid_lft forever preferred_lft forever 00:22:01.980 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:01.980 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:01.980 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:01.980 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:01.980 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:01.980 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:01.980 10:10:45 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:01.980 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:01.980 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:22:01.980 altname enp218s0f1np1 00:22:01.980 altname ens818f1np1 00:22:01.980 inet 192.168.100.9/24 scope global mlx_0_1 00:22:01.980 valid_lft forever preferred_lft forever 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_0 
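Interface addresses are read back with the ip/awk/cut pipeline traced at nvmf/common.sh @112-113, which is small enough to reconstruct directly from the log:

get_ip_address() {
    local interface=$1
    # 'ip -o -4 addr show mlx_0_0' prints '6: mlx_0_0  inet 192.168.100.8/24 ...';
    # field 4 is the CIDR address, and cut strips the prefix length.
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}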
00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # continue 2 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:01.980 192.168.100.9' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:01.980 192.168.100.9' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # head -n 1 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:01.980 192.168.100.9' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # tail -n +2 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # head -n 1 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:01.980 
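The two addresses end up newline-separated in RDMA_IP_LIST, and the first/second target IPs are peeled off with head and tail exactly as traced at nvmf/common.sh @456-458:

RDMA_IP_LIST='192.168.100.8
192.168.100.9'
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9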
10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2633872 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2633872 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2633872 ']' 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.980 [2024-07-25 10:10:46.144914] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:01.980 [2024-07-25 10:10:46.144957] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:01.980 EAL: No free 2048 kB hugepages reported on node 1 00:22:01.980 [2024-07-25 10:10:46.211514] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:01.980 [2024-07-25 10:10:46.292325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.980 [2024-07-25 10:10:46.292358] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.980 [2024-07-25 10:10:46.292365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.980 [2024-07-25 10:10:46.292371] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.980 [2024-07-25 10:10:46.292376] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
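nvmfappstart above reduces to launching the target and polling its RPC socket; condensed from the traced commands at nvmf/common.sh @479-482, with waitforlisten's retry loop omitted and the workspace prefix shortened:

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
nvmfpid=$!                 # 2633872 in this run
waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock accepts RPCs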
00:22:01.980 [2024-07-25 10:10:46.292420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.980 [2024-07-25 10:10:46.292422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2633872 00:22:01.980 10:10:46 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:02.240 [2024-07-25 10:10:47.159373] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16f03c0/0x16f48b0) succeed. 00:22:02.240 [2024-07-25 10:10:47.168102] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16f1870/0x1735f40) succeed. 00:22:02.240 10:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:02.499 Malloc0 00:22:02.499 10:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:02.499 10:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:02.758 10:10:47 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:03.016 [2024-07-25 10:10:47.979013] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:03.016 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:22:03.016 [2024-07-25 10:10:48.167365] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:22:03.276 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:03.276 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2634347 00:22:03.276 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:03.276 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2634347 /var/tmp/bdevperf.sock 00:22:03.276 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2634347 ']' 00:22:03.276 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:03.276 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:03.276 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:03.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:03.276 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:03.276 10:10:48 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:04.213 10:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:04.213 10:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:22:04.213 10:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:04.213 10:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:04.472 Nvme0n1 00:22:04.472 10:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:04.731 Nvme0n1 00:22:04.731 10:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:04.731 10:10:49 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:06.634 10:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:06.634 10:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:22:06.893 10:10:51 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:07.164 10:10:52 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:08.131 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:08.131 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:08.131 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.131 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:08.390 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.390 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:08.390 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:08.390 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.390 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:08.390 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:08.390 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.390 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:08.649 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.649 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:08.649 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.649 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:08.907 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.907 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:08.908 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.908 10:10:53 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:08.908 10:10:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.908 10:10:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 
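From here on the test alternates two small helpers: set_ANA_state pushes one ANA state per listener, and port_status asserts a single field of bdev_nvme_get_io_paths for a given trsvcid. Both are reconstructed below from the traced rpc.py and jq invocations (host/multipath_status.sh @59-64); the local variable names are assumptions:

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

set_ANA_state() {   # @59-60: $1 applies to port 4420, $2 to port 4421
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4420 -n "$1"
    $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
        -t rdma -a 192.168.100.8 -s 4421 -n "$2"
}

port_status() {     # @64: e.g. port_status 4420 current true
    local port=$1 attr=$2 expected=$3
    local actual
    actual=$($rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$attr")
    [[ $actual == "$expected" ]]
}

check_status then chains six such assertions, current/connected/accessible for each of the two ports, which is the true/false matrix passed to it in the calls above.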
00:22:08.908 10:10:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.908 10:10:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:09.166 10:10:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:09.166 10:10:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:09.166 10:10:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:09.425 10:10:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:09.425 10:10:54 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.802 10:10:55 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:11.061 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.061 10:10:56 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:11.061 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.061 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:11.320 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.320 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:11.320 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.320 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:11.579 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.579 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:11.579 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.579 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:11.579 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.579 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:11.579 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:11.838 10:10:56 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:22:12.097 10:10:57 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:13.033 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:13.033 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:13.033 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.033 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:13.292 10:10:58 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.292 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:13.292 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.292 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:13.292 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:13.292 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:13.292 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.292 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:13.551 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.551 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:13.552 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.552 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:13.810 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.810 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:13.810 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.810 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:13.810 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.810 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:13.810 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.810 10:10:58 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:14.069 10:10:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.069 10:10:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # 
set_ANA_state non_optimized inaccessible 00:22:14.069 10:10:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:14.328 10:10:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:14.328 10:10:59 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.706 10:11:00 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:15.966 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:15.966 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:15.966 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:15.966 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:16.225 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.225 10:11:01 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:16.225 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.225 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:16.225 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:16.225 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:16.225 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.225 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:16.483 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:16.483 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:16.483 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:22:16.742 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:17.000 10:11:01 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:17.936 10:11:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:17.936 10:11:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:17.936 10:11:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.936 10:11:02 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:18.195 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.195 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:18.195 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.195 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:18.195 10:11:03 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.195 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:18.195 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.195 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:18.453 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.453 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:18.453 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:18.453 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.711 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:18.711 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:18.711 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.711 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:18.711 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.711 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:18.711 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:18.711 10:11:03 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:18.970 10:11:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:18.970 10:11:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:18.970 10:11:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n inaccessible 00:22:19.228 10:11:04 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:19.228 10:11:04 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:20.605 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.864 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.864 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:20.864 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.864 10:11:05 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:21.123 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.123 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:21.123 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.123 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:21.382 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
[[ false == \f\a\l\s\e ]] 00:22:21.382 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:21.382 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:21.382 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:21.382 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:21.382 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:21.642 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:21.642 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n optimized 00:22:21.901 10:11:06 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:21.901 10:11:07 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.277 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:23.537 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.537 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:23.537 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.537 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:23.796 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.796 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:23.796 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.796 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:23.796 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.796 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:23.796 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.796 10:11:08 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:24.055 10:11:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.055 10:11:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:24.055 10:11:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:24.314 10:11:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized 00:22:24.573 10:11:09 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:25.543 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:25.543 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:25.543 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.543 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:25.802 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.802 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:25.802 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.802 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:25.802 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:25.802 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:25.802 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.802 10:11:10 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:26.061 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.061 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:26.061 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.061 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:26.320 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.320 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:26.320 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.320 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:26.320 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.320 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:26.320 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.320 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:26.579 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.579 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:26.579 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:26.837 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n non_optimized 00:22:26.838 10:11:11 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:28.215 10:11:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:28.215 10:11:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:28.215 10:11:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.215 10:11:12 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:28.215 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.215 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:28.215 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.215 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:28.215 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.215 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:28.215 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.215 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:28.473 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.473 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:28.473 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:28.473 
10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.731 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.731 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:28.731 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.731 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:28.990 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.990 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:28.990 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:28.990 10:11:13 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:28.990 10:11:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.990 10:11:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:28.990 10:11:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 -n non_optimized 00:22:29.249 10:11:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n inaccessible 00:22:29.508 10:11:14 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:22:30.444 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:30.444 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:30.444 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.444 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:30.703 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.703 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:30.703 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:30.703 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.703 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:30.703 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:30.703 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.703 10:11:15 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:30.962 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.962 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:30.962 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.962 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:31.221 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.221 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:31.221 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.221 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2634347 00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2634347 ']' 00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2634347 00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@955 -- # uname
00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2634347
00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2
00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']'
00:22:31.481 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2634347'
killing process with pid 2634347
10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2634347
10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2634347
00:22:31.746 Connection closed with partial response:
00:22:31.746
00:22:31.746
00:22:31.746 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2634347
00:22:31.746 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:31.746 [2024-07-25 10:10:48.225085] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:22:31.746 [2024-07-25 10:10:48.225140] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2634347 ]
00:22:31.746 EAL: No free 2048 kB hugepages reported on node 1
00:22:31.746 [2024-07-25 10:10:48.294113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:31.746 [2024-07-25 10:10:48.369112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:31.746 Running I/O for 90 seconds...
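[Editor's note: the set_ANA_state/check_status sequence above is easier to audit in tabular form. Each row restates one transition from the log: the ANA states written to listeners 4420/4421 via nvmf_subsystem_listener_set_ana_state, the multipath policy in force at that point, and the six booleans check_status then asserts against bdev_nvme_get_io_paths, given as current/connected/accessible for port 4420 followed by port 4421. All values are copied from the output above.

  ANA state (4420, 4421)         policy          current       connected    accessible
  non_optimized, optimized       active_passive  false, true   true, true   true, true
  non_optimized, non_optimized   active_passive  true, false   true, true   true, true
  non_optimized, inaccessible    active_passive  true, false   true, true   true, false
  inaccessible, inaccessible     active_passive  false, false  true, true   false, false
  inaccessible, optimized        active_passive  false, true   true, true   false, true
  optimized, optimized           active_active   true, true    true, true   true, true
  non_optimized, optimized       active_active   false, true   true, true   true, true
  non_optimized, non_optimized   active_active   true, true    true, true   true, true
  non_optimized, inaccessible    active_active   true, false   true, true   true, false

The policy column flips at multipath_status.sh@116: before bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active, the bdev_nvme default (active_passive) keeps exactly one path "current", while active_active makes every path in the best available ANA group current at once, which is why non_optimized/non_optimized changes from true,false to true,true.]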
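[Editor's note: every port_status call above follows one pattern: query the host-side I/O paths over the bdevperf RPC socket and compare a single jq-extracted field. Below is a minimal bash sketch of that pattern, assuming an SPDK checkout at $rootdir and a bdevperf instance listening on /var/tmp/bdevperf.sock as in this run; it is an illustration, not a verbatim copy of test/nvmf/host/multipath_status.sh.

  rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk
  rpc_sock=/var/tmp/bdevperf.sock

  # port_status <trsvcid> <field> <expected>: succeeds iff the io_path whose
  # listener port matches <trsvcid> reports <field> == <expected>.
  port_status() {
      local port=$1 field=$2 expected=$3 actual
      actual=$("$rootdir/scripts/rpc.py" -s "$rpc_sock" bdev_nvme_get_io_paths |
          jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
      [[ $actual == "$expected" ]]
  }

  # Flip a listener's ANA state on the target side, give the host a second to
  # process the resulting ANA change notification, then assert the path roles.
  "$rootdir/scripts/rpc.py" nvmf_subsystem_listener_set_ana_state \
      nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 -n optimized
  sleep 1
  port_status 4421 current true

The sleep between the target-side RPC and the host-side assertion mirrors the sleep 1 steps in the log: the host only updates its path roles after it has consumed the ANA change event.]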
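[Editor's note: the try.txt dump that follows is bdevperf's own log of the I/O it had in flight during these transitions. Lines from nvme_qpair.c: 243:nvme_io_qpair_print_command describe each command: opcode (READ/WRITE), submission queue id (sqid), command id (cid), namespace (nsid), starting LBA and length in blocks, plus the data descriptor, "SGL KEYED DATA BLOCK ADDRESS ... key:..." for data the target moves over RDMA using the host memory key, "SGL DATA BLOCK OFFSET" for in-capsule data. Lines from nvme_qpair.c: 474:spdk_nvme_print_completion give the matching completion; "ASYMMETRIC ACCESS INACCESSIBLE (03/02)" is NVMe status code type 03h (path related), status code 02h, meaning the command failed because the listener it was queued on had just been made inaccessible, which is exactly the condition the ANA transitions above provoke.]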
00:22:31.746 [2024-07-25 10:11:01.724957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ec000 len:0x1000 key:0x183100 00:22:31.746 [2024-07-25 10:11:01.724999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x183100 00:22:31.746 [2024-07-25 10:11:01.725045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183100 00:22:31.746 [2024-07-25 10:11:01.725062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:16952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:16976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725176] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:17000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:17008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:17032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17064 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:17080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.746 [2024-07-25 10:11:01.725485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183100 00:22:31.746 [2024-07-25 10:11:01.725500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:31.746 [2024-07-25 10:11:01.725509] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x183100
00:22:31.746 [2024-07-25 10:11:01.725516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
[the remaining 36 command/completion pairs of this run have the same form: READ sqid:1 nsid:1 len:8 for lba 16408..16688 in steps of 8, each with a 0x1000-byte keyed SGL buffer (key:0x183100), and each completed ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd incrementing 0045..0068]
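Every completion in this stretch reports the same status pair, (03/02): Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), i.e. the ANA state of the path, not a media error. A minimal standalone sketch of how the printed fields unpack from the 16-bit status word of a completion entry (bit layout per the NVMe base specification; this is not SPDK's own print helper):

/* Sketch: unpack the fields spdk_nvme_print_completion shows as
 * "(03/02) ... p:0 m:0 dnr:0" from the 16-bit status+phase word
 * of an NVMe completion entry: P | SC | SCT | CRD | M | DNR. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* SCT 0x3 (Path Related Status), SC 0x02 (Asymmetric Access
     * Inaccessible): the "(03/02)" on every completion above. */
    uint16_t w = (uint16_t)((0x3u << 9) | (0x02u << 1));

    unsigned p   = w & 0x1u;          /* phase tag             */
    unsigned sc  = (w >> 1) & 0xffu;  /* status code           */
    unsigned sct = (w >> 9) & 0x7u;   /* status code type      */
    unsigned m   = (w >> 14) & 0x1u;  /* more status available */
    unsigned dnr = (w >> 15) & 0x1u;  /* do not retry          */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    return 0;
}

With dnr:0 on every entry the controller leaves the commands retryable, which is what lets the initiator keep requeueing these I/Os while this path is inaccessible.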
00:22:31.747 [2024-07-25 10:11:01.726081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:17136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:31.747 [2024-07-25 10:11:01.726087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
[a second WRITE (lba 17144, sqhd:006a) fails identically; 30 READ pairs follow for lba 16696..16928 in steps of 8 (keyed SGL buffers, key:0x183100), sqhd running 006b..007f and wrapping to 0000..0008; then five WRITE pairs for lba 17152..17184 (SGL DATA BLOCK OFFSET 0x0, sqhd 0009..000d); every completion is again ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0]
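Note the asymmetry in the command lines: the READs carry "SGL KEYED DATA BLOCK ADDRESS ... key:0x183100" (a host buffer the RDMA peer addresses remotely via that key), while these small WRITEs carry "SGL DATA BLOCK OFFSET 0x0" (offset-based, i.e. data shipped in the command capsule). A sketch of the 16-byte keyed SGL descriptor the READ lines summarize, assuming a little-endian host and GCC-style packing; the layout follows the NVMe base spec, the struct and field names are mine:

/* Sketch of an NVMe keyed SGL data block descriptor -- what the
 * "SGL KEYED DATA BLOCK ADDRESS 0x... len:0x1000 key:0x183100"
 * lines above summarize. 16 bytes, little-endian fields. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct keyed_sgl {
    uint64_t address;   /* bytes 0-7:  buffer address              */
    uint8_t  length[3]; /* bytes 8-10: 24-bit length               */
    uint8_t  key[4];    /* bytes 11-14: 32-bit key (the RDMA rkey) */
    uint8_t  id;        /* byte 15: type 4h (keyed data block) in
                           bits 7:4, subtype 0h (address) -> 0x40  */
} __attribute__((packed));

int main(void)
{
    struct keyed_sgl d = { .address = 0x200007546000ull, .id = 0x40 };
    uint32_t len = 0x1000, key = 0x183100;
    memcpy(d.length, &len, 3);  /* little-endian host assumed */
    memcpy(d.key, &key, 4);

    printf("ADDRESS 0x%llx len:0x%x key:0x%x\n",
           (unsigned long long)d.address,
           (unsigned)(d.length[0] | d.length[1] << 8 | d.length[2] << 16),
           (unsigned)(d.key[0] | d.key[1] << 8 | d.key[2] << 16 |
                      (unsigned)d.key[3] << 24));
    return 0;
}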
00:22:31.748 [2024-07-25 10:11:01.727344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:31.748 [2024-07-25 10:11:01.727352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0
[24 more WRITE pairs in the same form cover lba 17200..17384 in steps of 8 (SGL DATA BLOCK OFFSET 0x0 len:0x1000, sqhd 000f..0026), all completed ASYMMETRIC ACCESS INACCESSIBLE (03/02); the in-band timestamps then jump from 10:11:01.727876 to 10:11:14.445313 while the console timestamp stays at 00:22:31.748, i.e. roughly thirteen seconds of the same failure pattern arrive on the console in one burst]
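A capture like this is easier to sift mechanically than by eye. The command lines are regular enough that the interesting fields fall out of a single sscanf; the format string below just mirrors the nvme_io_qpair_print_command lines above, as ad-hoc triage tooling rather than any SPDK interface:

/* Sketch: pull the per-I/O fields out of one command line of this
 * log, e.g. to bucket the failing LBAs and spot retries. */
#include <stdio.h>

int main(void)
{
    const char *line = "READ sqid:1 cid:101 nsid:1 lba:126576 len:8";
    char opc[16];
    unsigned sqid, cid, nsid, len;
    unsigned long long lba;

    if (sscanf(line, "%15s sqid:%u cid:%u nsid:%u lba:%llu len:%u",
               opc, &sqid, &cid, &nsid, &lba, &len) == 6)
        printf("%s qid=%u cid=%u lba=%llu len=%u\n",
               opc, sqid, cid, lba, len);
    return 0;
}

Bucketing every command line by lba this way should make the pattern in the burst below easy to spot: many of the same LBAs reappear under new cids.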
00:22:31.748 [2024-07-25 10:11:14.445313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:126576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x183100
00:22:31.748 [2024-07-25 10:11:14.445352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006b p:0 m:0 dnr:0
[roughly 70 further pairs land between 10:11:14.445 and 10:11:14.449: READs over lba 126592..127160 (keyed SGL buffers, key:0x183100) interleaved with WRITEs over lba 127152..127600 (SGL DATA BLOCK OFFSET 0x0), with several LBAs reappearing under new cids, consistent with the same I/Os being retried and failing again; every completion is ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, sqhd running 006c..007f and wrapping to 0000..0042]
00:22:31.751 [2024-07-25
10:11:14.449610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:126976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:127024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c2000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:127072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:127600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126592 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200007566000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757e000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:127176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:127192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:127232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:127680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:27 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:127704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.449959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:127320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.449986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075bc000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.449993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.450004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.450010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.450020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075c8000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.450027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.450036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.450043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.450054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:127392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007596000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.450061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.450072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:127408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.450079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.450089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 
nsid:1 lba:127768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.450096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.450106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.450113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.450122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:127776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.751 [2024-07-25 10:11:14.450135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:31.751 [2024-07-25 10:11:14.450145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183100 00:22:31.751 [2024-07-25 10:11:14.450153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:127496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075d6000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:127512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:127816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.450203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.450252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ca000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:127608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.450303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.450319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ee000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.450352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127136 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a2000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.450385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:126624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e8000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 
10:11:14.450412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:126688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.450436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.450445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.450452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.452123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:127264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.452150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:127920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.452167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:127312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.452184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007586000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.452200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.452217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:31.752 [2024-07-25 10:11:14.452504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:127440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.452522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.452539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:127992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.452555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.452572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:127560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075be000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.452590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.452610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:127152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ae000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.452628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:127624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.452645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:128040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.452661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 
00:22:31.752 [2024-07-25 10:11:14.452671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.452678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:128064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.752 [2024-07-25 10:11:14.452695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:127696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183100 00:22:31.752 [2024-07-25 10:11:14.452711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:31.752 [2024-07-25 10:11:14.452721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:127720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e4000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.452728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.452745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:128088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.452762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.452778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.452795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.452812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:127800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x183100 
00:22:31.753 [2024-07-25 10:11:14.452829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.452845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075a6000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.452862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.452879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.452896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:127872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.452914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.452930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:127240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.452948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.452958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:127296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e2000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.452965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:127336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 
sqhd:0017 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:127376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.453190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:127432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:127472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:127488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:126976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007590000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.453255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:127040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.453271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:127576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:126592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.453303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:127176 len:8 SGL KEYED DATA 
BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.453319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:127640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:127232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007580000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.453352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:127672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:127688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:127320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075ea000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.453405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:127736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:127752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.753 [2024-07-25 10:11:14.453437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:127408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.453454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:127456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075e6000 len:0x1000 key:0x183100 00:22:31.753 [2024-07-25 10:11:14.453470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:31.753 [2024-07-25 10:11:14.453480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:127480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.453486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:127512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075dc000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.453503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:127536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000758e000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.453519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:127832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.453535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:127608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007584000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.453553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:127856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.453569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:127880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.453585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:127224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.453601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.453616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453625] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:127896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.453632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:127264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075f0000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.453650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:127312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.453667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.453676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.453683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.455178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:127976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.455198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:128000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.455651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:128024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.455681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:127624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.455699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:128048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.455718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:127696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000759a000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.455735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:128080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.455752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:128096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.455769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:127784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.455787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:128128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.455803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.455819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075aa000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.455837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:127240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x183100 00:22:31.754 [2024-07-25 10:11:14.455854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.754 [2024-07-25 10:11:14.455871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:31.754 [2024-07-25 10:11:14.455881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
00:22:31.754 [... long run of near-identical *NOTICE* pairs condensed: nvme_io_qpair_print_command READ/WRITE entries on sqid:1, each completed by spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02); the final pairs and the test summary follow ...]
ADDRESS 0x2000075f0000 len:0x1000 key:0x183100 00:22:31.756 [2024-07-25 10:11:14.456887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:31.756 [2024-07-25 10:11:14.456897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:127944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:31.756 [2024-07-25 10:11:14.456904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:31.756 Received shutdown signal, test time was about 26.756553 seconds 00:22:31.756 00:22:31.756 Latency(us) 00:22:31.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:31.756 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:31.756 Verification LBA range: start 0x0 length 0x4000 00:22:31.756 Nvme0n1 : 26.76 15543.93 60.72 0.00 0.00 8215.03 76.56 3019898.88 00:22:31.756 =================================================================================================================== 00:22:31.756 Total : 15543.93 60.72 0.00 0.00 8215.03 76.56 3019898.88 00:22:31.756 10:11:16 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:32.013 rmmod nvme_rdma 00:22:32.013 rmmod nvme_fabrics 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2633872 ']' 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2633872 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2633872 ']' 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2633872 00:22:32.013 10:11:17 
nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2633872 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2633872' 00:22:32.013 killing process with pid 2633872 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2633872 00:22:32.013 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2633872 00:22:32.272 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:32.272 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:32.272 00:22:32.272 real 0m36.996s 00:22:32.272 user 1m47.694s 00:22:32.272 sys 0m7.734s 00:22:32.272 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:32.272 10:11:17 nvmf_rdma.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:32.272 ************************************ 00:22:32.272 END TEST nvmf_host_multipath_status 00:22:32.272 ************************************ 00:22:32.272 10:11:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:22:32.272 10:11:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:32.272 10:11:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:32.272 10:11:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.530 ************************************ 00:22:32.530 START TEST nvmf_discovery_remove_ifc 00:22:32.530 ************************************ 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:22:32.530 * Looking for test storage... 
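The wind-down traced above (multipath_status.sh@143-148 -> nvmftestfini -> nvmfcleanup -> killprocess) amounts to a short sequence of shell steps. A condensed sketch, not the verbatim nvmf/common.sh source, using the workspace path and target PID from this run:

#!/usr/bin/env bash
# Condensed sketch of the teardown traced above; run as root.
SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
NVMF_APP_PID=2633872   # nvmf_tgt (reactor_0) PID captured earlier in this run

# Tear down the subsystem the test exercised, clear the exit trap,
# and remove the scratch file.
"$SPDK/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
trap - SIGINT SIGTERM EXIT
rm -f "$SPDK/test/nvmf/host/try.txt"

# nvmfcleanup: flush outstanding writes, then unload the host-side kernel
# modules (the -v output above shows nvme_rdma and nvme_fabrics being rmmod'ed).
sync
modprobe -v -r nvme-rdma
modprobe -v -r nvme-fabrics

# killprocess: terminate the target application and reap it
# (wait works here because nvmf_tgt was started from the same shell).
kill "$NVMF_APP_PID"
wait "$NVMF_APP_PID"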
00:22:32.530 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.530 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:22:32.531 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:22:32.531 00:22:32.531 real 0m0.114s 00:22:32.531 user 0m0.052s 00:22:32.531 sys 0m0.069s 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:32.531 ************************************ 00:22:32.531 END TEST nvmf_discovery_remove_ifc 00:22:32.531 ************************************ 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:32.531 ************************************ 00:22:32.531 START TEST nvmf_identify_kernel_target 00:22:32.531 ************************************ 00:22:32.531 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=rdma 00:22:32.790 * Looking for test storage... 
00:22:32.790 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.790 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:32.791 10:11:17 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 
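gather_supported_nvmf_pci_devs, traced below, fills the e810/x722/mlx arrays with known NIC device IDs and then maps each matching PCI function to its kernel net interface through sysfs. A minimal sketch of that mapping step, assuming the two mlx5 (0x15b3 - 0x1015) functions found in this run:

#!/usr/bin/env bash
# Each PCI network function exposes its netdev name under
# /sys/bus/pci/devices/<address>/net/; globbing that directory and
# stripping the leading path yields the interface name (mlx_0_0, mlx_0_1 here).
pci_devs=(0000:da:00.0 0000:da:00.1)

for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done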
00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.062 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:38.063 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:38.063 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:38.063 Found net devices under 0000:da:00.0: mlx_0_0 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:38.063 Found net devices under 0000:da:00.1: mlx_0_1 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ 
rdma == tcp ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@420 -- # rdma_device_init 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # uname 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.063 10:11:23 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:38.063 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:38.323 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:38.323 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:22:38.323 altname enp218s0f0np0 00:22:38.323 altname ens818f0np0 00:22:38.323 inet 192.168.100.8/24 scope global mlx_0_0 00:22:38.323 valid_lft forever preferred_lft forever 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:38.323 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:38.323 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:22:38.323 altname enp218s0f1np1 00:22:38.323 altname ens818f1np1 00:22:38.323 inet 192.168.100.9/24 scope global mlx_0_1 00:22:38.323 valid_lft forever preferred_lft forever 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 
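allocate_nic_ips, traced above, reads each RDMA interface's IPv4 address back with a small ip/awk/cut pipeline (nvmf/common.sh@112-113). The helper as a standalone sketch:

#!/usr/bin/env bash
# Print the first IPv4 address of an interface without its prefix length:
# `ip -o -4 addr show` puts ADDR/PREFIX in field 4 of its one-line-per-address output.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

get_ip_address mlx_0_0   # -> 192.168.100.8 in this run
get_ip_address mlx_0_1   # -> 192.168.100.9 in this run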
00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # get_rdma_if_list 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.323 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # continue 2 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # ip 
-o -4 addr show mlx_0_1 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:38.324 192.168.100.9' 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:38.324 192.168.100.9' 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # head -n 1 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:38.324 192.168.100.9' 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # tail -n +2 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # head -n 1 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=192.168.100.8 00:22:38.324 
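configure_kernel_target, traced next, builds the Linux kernel (nvmet) target entirely through configfs. The trace elides the files its echo commands write to; a condensed sketch with the standard nvmet configfs attribute names filled in as an assumption, using the NQN, block device, and address from this run:

#!/usr/bin/env bash
# Condensed sketch of the configure_kernel_target steps traced below.
# Run as root; nvmet-rdma must be loadable for the rdma port to bind.
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

modprobe nvmet
mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"

# Subsystem and namespace (attribute file names are assumed, not traced).
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# RDMA listener on the first target IP.
echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
echo rdma > "$nvmet/ports/1/addr_trtype"
echo 4420 > "$nvmet/ports/1/addr_trsvcid"
echo ipv4 > "$nvmet/ports/1/addr_adrfam"

# Linking the subsystem into the port exposes it for discovery.
ln -s "$subsys" "$nvmet/ports/1/subsystems/"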
10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 192.168.100.8 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=192.168.100.8 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:38.324 10:11:23 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:40.857 Waiting for block devices as requested 00:22:41.116 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:22:41.116 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:41.116 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:41.375 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:41.375 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:41.375 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:41.634 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:41.634 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:41.634 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:41.893 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:41.893 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:41.893 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:41.893 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:42.152 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:42.152 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:42.152 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:42.411 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # 
block_in_use nvme0n1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:42.411 No valid GPT data, bailing 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo rdma 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:42.411 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:22:42.670 00:22:42.670 Discovery Log Number of Records 2, Generation counter 2 00:22:42.670 =====Discovery Log Entry 0====== 00:22:42.670 trtype: rdma 00:22:42.670 adrfam: ipv4 00:22:42.670 subtype: current discovery subsystem 00:22:42.670 treq: not specified, sq flow control disable supported 00:22:42.670 portid: 1 00:22:42.670 trsvcid: 4420 00:22:42.670 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:42.670 traddr: 192.168.100.8 00:22:42.670 eflags: none 00:22:42.670 rdma_prtype: not specified 00:22:42.670 rdma_qptype: connected 00:22:42.670 rdma_cms: rdma-cm 00:22:42.670 rdma_pkey: 0x0000 00:22:42.670 =====Discovery Log Entry 1====== 00:22:42.670 trtype: rdma 00:22:42.670 adrfam: ipv4 00:22:42.670 subtype: nvme subsystem 00:22:42.670 
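The mkdir/echo/ln -s sequence above is the kernel nvmet configfs setup that configure_kernel_target performs before the discovery below succeeds. xtrace does not show redirect targets, so the attribute file names in this sketch are assumptions based on the standard nvmet configfs layout; the values are exactly the ones echoed in the trace:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"    # assumed redirect target
    echo 1 > "$subsys/attr_allow_any_host"                          # assumed redirect target
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"          # the block device probed above
    echo 1 > "$subsys/namespaces/1/enable"
    echo 192.168.100.8 > "$nvmet/ports/1/addr_traddr"
    echo rdma > "$nvmet/ports/1/addr_trtype"
    echo 4420 > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4 > "$nvmet/ports/1/addr_adrfam"
    # Exposing the subsystem on the port is what makes it discoverable:
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"

Once the symlink exists, the nvme discover call traced above returns the two records (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn) printed next.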
treq: not specified, sq flow control disable supported 00:22:42.670 portid: 1 00:22:42.670 trsvcid: 4420 00:22:42.670 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:42.670 traddr: 192.168.100.8 00:22:42.670 eflags: none 00:22:42.670 rdma_prtype: not specified 00:22:42.670 rdma_qptype: connected 00:22:42.670 rdma_cms: rdma-cm 00:22:42.670 rdma_pkey: 0x0000 00:22:42.670 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 00:22:42.670 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:42.670 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.670 ===================================================== 00:22:42.670 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:42.670 ===================================================== 00:22:42.670 Controller Capabilities/Features 00:22:42.670 ================================ 00:22:42.670 Vendor ID: 0000 00:22:42.670 Subsystem Vendor ID: 0000 00:22:42.670 Serial Number: 65eaa4defecb76a6c970 00:22:42.670 Model Number: Linux 00:22:42.670 Firmware Version: 6.7.0-68 00:22:42.670 Recommended Arb Burst: 0 00:22:42.670 IEEE OUI Identifier: 00 00 00 00:22:42.670 Multi-path I/O 00:22:42.670 May have multiple subsystem ports: No 00:22:42.670 May have multiple controllers: No 00:22:42.670 Associated with SR-IOV VF: No 00:22:42.670 Max Data Transfer Size: Unlimited 00:22:42.670 Max Number of Namespaces: 0 00:22:42.670 Max Number of I/O Queues: 1024 00:22:42.670 NVMe Specification Version (VS): 1.3 00:22:42.670 NVMe Specification Version (Identify): 1.3 00:22:42.670 Maximum Queue Entries: 128 00:22:42.670 Contiguous Queues Required: No 00:22:42.670 Arbitration Mechanisms Supported 00:22:42.670 Weighted Round Robin: Not Supported 00:22:42.670 Vendor Specific: Not Supported 00:22:42.670 Reset Timeout: 7500 ms 00:22:42.670 Doorbell Stride: 4 bytes 00:22:42.670 NVM Subsystem Reset: Not Supported 00:22:42.670 Command Sets Supported 00:22:42.670 NVM Command Set: Supported 00:22:42.670 Boot Partition: Not Supported 00:22:42.670 Memory Page Size Minimum: 4096 bytes 00:22:42.670 Memory Page Size Maximum: 4096 bytes 00:22:42.670 Persistent Memory Region: Not Supported 00:22:42.670 Optional Asynchronous Events Supported 00:22:42.670 Namespace Attribute Notices: Not Supported 00:22:42.670 Firmware Activation Notices: Not Supported 00:22:42.670 ANA Change Notices: Not Supported 00:22:42.670 PLE Aggregate Log Change Notices: Not Supported 00:22:42.670 LBA Status Info Alert Notices: Not Supported 00:22:42.670 EGE Aggregate Log Change Notices: Not Supported 00:22:42.670 Normal NVM Subsystem Shutdown event: Not Supported 00:22:42.670 Zone Descriptor Change Notices: Not Supported 00:22:42.670 Discovery Log Change Notices: Supported 00:22:42.670 Controller Attributes 00:22:42.670 128-bit Host Identifier: Not Supported 00:22:42.670 Non-Operational Permissive Mode: Not Supported 00:22:42.670 NVM Sets: Not Supported 00:22:42.670 Read Recovery Levels: Not Supported 00:22:42.670 Endurance Groups: Not Supported 00:22:42.670 Predictable Latency Mode: Not Supported 00:22:42.670 Traffic Based Keep ALive: Not Supported 00:22:42.670 Namespace Granularity: Not Supported 00:22:42.670 SQ Associations: Not Supported 00:22:42.670 UUID List: Not Supported 00:22:42.670 Multi-Domain Subsystem: Not Supported 00:22:42.670 Fixed Capacity Management: Not Supported 00:22:42.670 Variable 
Capacity Management: Not Supported 00:22:42.671 Delete Endurance Group: Not Supported 00:22:42.671 Delete NVM Set: Not Supported 00:22:42.671 Extended LBA Formats Supported: Not Supported 00:22:42.671 Flexible Data Placement Supported: Not Supported 00:22:42.671 00:22:42.671 Controller Memory Buffer Support 00:22:42.671 ================================ 00:22:42.671 Supported: No 00:22:42.671 00:22:42.671 Persistent Memory Region Support 00:22:42.671 ================================ 00:22:42.671 Supported: No 00:22:42.671 00:22:42.671 Admin Command Set Attributes 00:22:42.671 ============================ 00:22:42.671 Security Send/Receive: Not Supported 00:22:42.671 Format NVM: Not Supported 00:22:42.671 Firmware Activate/Download: Not Supported 00:22:42.671 Namespace Management: Not Supported 00:22:42.671 Device Self-Test: Not Supported 00:22:42.671 Directives: Not Supported 00:22:42.671 NVMe-MI: Not Supported 00:22:42.671 Virtualization Management: Not Supported 00:22:42.671 Doorbell Buffer Config: Not Supported 00:22:42.671 Get LBA Status Capability: Not Supported 00:22:42.671 Command & Feature Lockdown Capability: Not Supported 00:22:42.671 Abort Command Limit: 1 00:22:42.671 Async Event Request Limit: 1 00:22:42.671 Number of Firmware Slots: N/A 00:22:42.671 Firmware Slot 1 Read-Only: N/A 00:22:42.671 Firmware Activation Without Reset: N/A 00:22:42.671 Multiple Update Detection Support: N/A 00:22:42.671 Firmware Update Granularity: No Information Provided 00:22:42.671 Per-Namespace SMART Log: No 00:22:42.671 Asymmetric Namespace Access Log Page: Not Supported 00:22:42.671 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:42.671 Command Effects Log Page: Not Supported 00:22:42.671 Get Log Page Extended Data: Supported 00:22:42.671 Telemetry Log Pages: Not Supported 00:22:42.671 Persistent Event Log Pages: Not Supported 00:22:42.671 Supported Log Pages Log Page: May Support 00:22:42.671 Commands Supported & Effects Log Page: Not Supported 00:22:42.671 Feature Identifiers & Effects Log Page:May Support 00:22:42.671 NVMe-MI Commands & Effects Log Page: May Support 00:22:42.671 Data Area 4 for Telemetry Log: Not Supported 00:22:42.671 Error Log Page Entries Supported: 1 00:22:42.671 Keep Alive: Not Supported 00:22:42.671 00:22:42.671 NVM Command Set Attributes 00:22:42.671 ========================== 00:22:42.671 Submission Queue Entry Size 00:22:42.671 Max: 1 00:22:42.671 Min: 1 00:22:42.671 Completion Queue Entry Size 00:22:42.671 Max: 1 00:22:42.671 Min: 1 00:22:42.671 Number of Namespaces: 0 00:22:42.671 Compare Command: Not Supported 00:22:42.671 Write Uncorrectable Command: Not Supported 00:22:42.671 Dataset Management Command: Not Supported 00:22:42.671 Write Zeroes Command: Not Supported 00:22:42.671 Set Features Save Field: Not Supported 00:22:42.671 Reservations: Not Supported 00:22:42.671 Timestamp: Not Supported 00:22:42.671 Copy: Not Supported 00:22:42.671 Volatile Write Cache: Not Present 00:22:42.671 Atomic Write Unit (Normal): 1 00:22:42.671 Atomic Write Unit (PFail): 1 00:22:42.671 Atomic Compare & Write Unit: 1 00:22:42.671 Fused Compare & Write: Not Supported 00:22:42.671 Scatter-Gather List 00:22:42.671 SGL Command Set: Supported 00:22:42.671 SGL Keyed: Supported 00:22:42.671 SGL Bit Bucket Descriptor: Not Supported 00:22:42.671 SGL Metadata Pointer: Not Supported 00:22:42.671 Oversized SGL: Not Supported 00:22:42.671 SGL Metadata Address: Not Supported 00:22:42.671 SGL Offset: Supported 00:22:42.671 Transport SGL Data Block: Not Supported 00:22:42.671 Replay 
Protected Memory Block: Not Supported 00:22:42.671 00:22:42.671 Firmware Slot Information 00:22:42.671 ========================= 00:22:42.671 Active slot: 0 00:22:42.671 00:22:42.671 00:22:42.671 Error Log 00:22:42.671 ========= 00:22:42.671 00:22:42.671 Active Namespaces 00:22:42.671 ================= 00:22:42.671 Discovery Log Page 00:22:42.671 ================== 00:22:42.671 Generation Counter: 2 00:22:42.671 Number of Records: 2 00:22:42.671 Record Format: 0 00:22:42.671 00:22:42.671 Discovery Log Entry 0 00:22:42.671 ---------------------- 00:22:42.671 Transport Type: 1 (RDMA) 00:22:42.671 Address Family: 1 (IPv4) 00:22:42.671 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:42.671 Entry Flags: 00:22:42.671 Duplicate Returned Information: 0 00:22:42.671 Explicit Persistent Connection Support for Discovery: 0 00:22:42.671 Transport Requirements: 00:22:42.671 Secure Channel: Not Specified 00:22:42.671 Port ID: 1 (0x0001) 00:22:42.671 Controller ID: 65535 (0xffff) 00:22:42.671 Admin Max SQ Size: 32 00:22:42.671 Transport Service Identifier: 4420 00:22:42.671 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:42.671 Transport Address: 192.168.100.8 00:22:42.671 Transport Specific Address Subtype - RDMA 00:22:42.671 RDMA QP Service Type: 1 (Reliable Connected) 00:22:42.671 RDMA Provider Type: 1 (No provider specified) 00:22:42.671 RDMA CM Service: 1 (RDMA_CM) 00:22:42.671 Discovery Log Entry 1 00:22:42.671 ---------------------- 00:22:42.671 Transport Type: 1 (RDMA) 00:22:42.671 Address Family: 1 (IPv4) 00:22:42.671 Subsystem Type: 2 (NVM Subsystem) 00:22:42.671 Entry Flags: 00:22:42.671 Duplicate Returned Information: 0 00:22:42.671 Explicit Persistent Connection Support for Discovery: 0 00:22:42.671 Transport Requirements: 00:22:42.671 Secure Channel: Not Specified 00:22:42.671 Port ID: 1 (0x0001) 00:22:42.671 Controller ID: 65535 (0xffff) 00:22:42.671 Admin Max SQ Size: 32 00:22:42.671 Transport Service Identifier: 4420 00:22:42.671 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:42.671 Transport Address: 192.168.100.8 00:22:42.671 Transport Specific Address Subtype - RDMA 00:22:42.671 RDMA QP Service Type: 1 (Reliable Connected) 00:22:42.931 RDMA Provider Type: 1 (No provider specified) 00:22:42.931 RDMA CM Service: 1 (RDMA_CM) 00:22:42.931 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:42.931 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.931 get_feature(0x01) failed 00:22:42.931 get_feature(0x02) failed 00:22:42.931 get_feature(0x04) failed 00:22:42.931 ===================================================== 00:22:42.931 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:testnqn 00:22:42.931 ===================================================== 00:22:42.931 Controller Capabilities/Features 00:22:42.931 ================================ 00:22:42.931 Vendor ID: 0000 00:22:42.931 Subsystem Vendor ID: 0000 00:22:42.931 Serial Number: a7b2c77da32338291240 00:22:42.931 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:42.931 Firmware Version: 6.7.0-68 00:22:42.931 Recommended Arb Burst: 6 00:22:42.931 IEEE OUI Identifier: 00 00 00 00:22:42.931 Multi-path I/O 00:22:42.931 May have multiple subsystem ports: Yes 00:22:42.931 May have multiple controllers: Yes 00:22:42.931 Associated with 
SR-IOV VF: No 00:22:42.931 Max Data Transfer Size: 1048576 00:22:42.931 Max Number of Namespaces: 1024 00:22:42.931 Max Number of I/O Queues: 128 00:22:42.931 NVMe Specification Version (VS): 1.3 00:22:42.931 NVMe Specification Version (Identify): 1.3 00:22:42.931 Maximum Queue Entries: 128 00:22:42.931 Contiguous Queues Required: No 00:22:42.931 Arbitration Mechanisms Supported 00:22:42.931 Weighted Round Robin: Not Supported 00:22:42.931 Vendor Specific: Not Supported 00:22:42.931 Reset Timeout: 7500 ms 00:22:42.931 Doorbell Stride: 4 bytes 00:22:42.931 NVM Subsystem Reset: Not Supported 00:22:42.931 Command Sets Supported 00:22:42.931 NVM Command Set: Supported 00:22:42.931 Boot Partition: Not Supported 00:22:42.931 Memory Page Size Minimum: 4096 bytes 00:22:42.931 Memory Page Size Maximum: 4096 bytes 00:22:42.931 Persistent Memory Region: Not Supported 00:22:42.931 Optional Asynchronous Events Supported 00:22:42.931 Namespace Attribute Notices: Supported 00:22:42.931 Firmware Activation Notices: Not Supported 00:22:42.931 ANA Change Notices: Supported 00:22:42.931 PLE Aggregate Log Change Notices: Not Supported 00:22:42.931 LBA Status Info Alert Notices: Not Supported 00:22:42.931 EGE Aggregate Log Change Notices: Not Supported 00:22:42.931 Normal NVM Subsystem Shutdown event: Not Supported 00:22:42.931 Zone Descriptor Change Notices: Not Supported 00:22:42.931 Discovery Log Change Notices: Not Supported 00:22:42.931 Controller Attributes 00:22:42.931 128-bit Host Identifier: Supported 00:22:42.931 Non-Operational Permissive Mode: Not Supported 00:22:42.931 NVM Sets: Not Supported 00:22:42.931 Read Recovery Levels: Not Supported 00:22:42.931 Endurance Groups: Not Supported 00:22:42.931 Predictable Latency Mode: Not Supported 00:22:42.931 Traffic Based Keep ALive: Supported 00:22:42.931 Namespace Granularity: Not Supported 00:22:42.931 SQ Associations: Not Supported 00:22:42.931 UUID List: Not Supported 00:22:42.931 Multi-Domain Subsystem: Not Supported 00:22:42.931 Fixed Capacity Management: Not Supported 00:22:42.931 Variable Capacity Management: Not Supported 00:22:42.931 Delete Endurance Group: Not Supported 00:22:42.931 Delete NVM Set: Not Supported 00:22:42.931 Extended LBA Formats Supported: Not Supported 00:22:42.931 Flexible Data Placement Supported: Not Supported 00:22:42.931 00:22:42.931 Controller Memory Buffer Support 00:22:42.931 ================================ 00:22:42.931 Supported: No 00:22:42.931 00:22:42.931 Persistent Memory Region Support 00:22:42.931 ================================ 00:22:42.931 Supported: No 00:22:42.931 00:22:42.931 Admin Command Set Attributes 00:22:42.931 ============================ 00:22:42.931 Security Send/Receive: Not Supported 00:22:42.931 Format NVM: Not Supported 00:22:42.931 Firmware Activate/Download: Not Supported 00:22:42.931 Namespace Management: Not Supported 00:22:42.931 Device Self-Test: Not Supported 00:22:42.931 Directives: Not Supported 00:22:42.931 NVMe-MI: Not Supported 00:22:42.931 Virtualization Management: Not Supported 00:22:42.931 Doorbell Buffer Config: Not Supported 00:22:42.931 Get LBA Status Capability: Not Supported 00:22:42.931 Command & Feature Lockdown Capability: Not Supported 00:22:42.931 Abort Command Limit: 4 00:22:42.931 Async Event Request Limit: 4 00:22:42.931 Number of Firmware Slots: N/A 00:22:42.931 Firmware Slot 1 Read-Only: N/A 00:22:42.931 Firmware Activation Without Reset: N/A 00:22:42.931 Multiple Update Detection Support: N/A 00:22:42.931 Firmware Update Granularity: No Information Provided 
00:22:42.931 Per-Namespace SMART Log: Yes 00:22:42.931 Asymmetric Namespace Access Log Page: Supported 00:22:42.931 ANA Transition Time : 10 sec 00:22:42.931 00:22:42.931 Asymmetric Namespace Access Capabilities 00:22:42.931 ANA Optimized State : Supported 00:22:42.931 ANA Non-Optimized State : Supported 00:22:42.931 ANA Inaccessible State : Supported 00:22:42.931 ANA Persistent Loss State : Supported 00:22:42.931 ANA Change State : Supported 00:22:42.932 ANAGRPID is not changed : No 00:22:42.932 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:42.932 00:22:42.932 ANA Group Identifier Maximum : 128 00:22:42.932 Number of ANA Group Identifiers : 128 00:22:42.932 Max Number of Allowed Namespaces : 1024 00:22:42.932 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:42.932 Command Effects Log Page: Supported 00:22:42.932 Get Log Page Extended Data: Supported 00:22:42.932 Telemetry Log Pages: Not Supported 00:22:42.932 Persistent Event Log Pages: Not Supported 00:22:42.932 Supported Log Pages Log Page: May Support 00:22:42.932 Commands Supported & Effects Log Page: Not Supported 00:22:42.932 Feature Identifiers & Effects Log Page:May Support 00:22:42.932 NVMe-MI Commands & Effects Log Page: May Support 00:22:42.932 Data Area 4 for Telemetry Log: Not Supported 00:22:42.932 Error Log Page Entries Supported: 128 00:22:42.932 Keep Alive: Supported 00:22:42.932 Keep Alive Granularity: 1000 ms 00:22:42.932 00:22:42.932 NVM Command Set Attributes 00:22:42.932 ========================== 00:22:42.932 Submission Queue Entry Size 00:22:42.932 Max: 64 00:22:42.932 Min: 64 00:22:42.932 Completion Queue Entry Size 00:22:42.932 Max: 16 00:22:42.932 Min: 16 00:22:42.932 Number of Namespaces: 1024 00:22:42.932 Compare Command: Not Supported 00:22:42.932 Write Uncorrectable Command: Not Supported 00:22:42.932 Dataset Management Command: Supported 00:22:42.932 Write Zeroes Command: Supported 00:22:42.932 Set Features Save Field: Not Supported 00:22:42.932 Reservations: Not Supported 00:22:42.932 Timestamp: Not Supported 00:22:42.932 Copy: Not Supported 00:22:42.932 Volatile Write Cache: Present 00:22:42.932 Atomic Write Unit (Normal): 1 00:22:42.932 Atomic Write Unit (PFail): 1 00:22:42.932 Atomic Compare & Write Unit: 1 00:22:42.932 Fused Compare & Write: Not Supported 00:22:42.932 Scatter-Gather List 00:22:42.932 SGL Command Set: Supported 00:22:42.932 SGL Keyed: Supported 00:22:42.932 SGL Bit Bucket Descriptor: Not Supported 00:22:42.932 SGL Metadata Pointer: Not Supported 00:22:42.932 Oversized SGL: Not Supported 00:22:42.932 SGL Metadata Address: Not Supported 00:22:42.932 SGL Offset: Supported 00:22:42.932 Transport SGL Data Block: Not Supported 00:22:42.932 Replay Protected Memory Block: Not Supported 00:22:42.932 00:22:42.932 Firmware Slot Information 00:22:42.932 ========================= 00:22:42.932 Active slot: 0 00:22:42.932 00:22:42.932 Asymmetric Namespace Access 00:22:42.932 =========================== 00:22:42.932 Change Count : 0 00:22:42.932 Number of ANA Group Descriptors : 1 00:22:42.932 ANA Group Descriptor : 0 00:22:42.932 ANA Group ID : 1 00:22:42.932 Number of NSID Values : 1 00:22:42.932 Change Count : 0 00:22:42.932 ANA State : 1 00:22:42.932 Namespace Identifier : 1 00:22:42.932 00:22:42.932 Commands Supported and Effects 00:22:42.932 ============================== 00:22:42.932 Admin Commands 00:22:42.932 -------------- 00:22:42.932 Get Log Page (02h): Supported 00:22:42.932 Identify (06h): Supported 00:22:42.932 Abort (08h): Supported 00:22:42.932 Set Features (09h): Supported 
00:22:42.932 Get Features (0Ah): Supported 00:22:42.932 Asynchronous Event Request (0Ch): Supported 00:22:42.932 Keep Alive (18h): Supported 00:22:42.932 I/O Commands 00:22:42.932 ------------ 00:22:42.932 Flush (00h): Supported 00:22:42.932 Write (01h): Supported LBA-Change 00:22:42.932 Read (02h): Supported 00:22:42.932 Write Zeroes (08h): Supported LBA-Change 00:22:42.932 Dataset Management (09h): Supported 00:22:42.932 00:22:42.932 Error Log 00:22:42.932 ========= 00:22:42.932 Entry: 0 00:22:42.932 Error Count: 0x3 00:22:42.932 Submission Queue Id: 0x0 00:22:42.932 Command Id: 0x5 00:22:42.932 Phase Bit: 0 00:22:42.932 Status Code: 0x2 00:22:42.932 Status Code Type: 0x0 00:22:42.932 Do Not Retry: 1 00:22:42.932 Error Location: 0x28 00:22:42.932 LBA: 0x0 00:22:42.932 Namespace: 0x0 00:22:42.932 Vendor Log Page: 0x0 00:22:42.932 ----------- 00:22:42.932 Entry: 1 00:22:42.932 Error Count: 0x2 00:22:42.932 Submission Queue Id: 0x0 00:22:42.932 Command Id: 0x5 00:22:42.932 Phase Bit: 0 00:22:42.932 Status Code: 0x2 00:22:42.932 Status Code Type: 0x0 00:22:42.932 Do Not Retry: 1 00:22:42.932 Error Location: 0x28 00:22:42.932 LBA: 0x0 00:22:42.932 Namespace: 0x0 00:22:42.932 Vendor Log Page: 0x0 00:22:42.932 ----------- 00:22:42.932 Entry: 2 00:22:42.932 Error Count: 0x1 00:22:42.932 Submission Queue Id: 0x0 00:22:42.932 Command Id: 0x0 00:22:42.932 Phase Bit: 0 00:22:42.932 Status Code: 0x2 00:22:42.932 Status Code Type: 0x0 00:22:42.932 Do Not Retry: 1 00:22:42.932 Error Location: 0x28 00:22:42.932 LBA: 0x0 00:22:42.932 Namespace: 0x0 00:22:42.932 Vendor Log Page: 0x0 00:22:42.932 00:22:42.932 Number of Queues 00:22:42.932 ================ 00:22:42.932 Number of I/O Submission Queues: 128 00:22:42.932 Number of I/O Completion Queues: 128 00:22:42.932 00:22:42.932 ZNS Specific Controller Data 00:22:42.932 ============================ 00:22:42.932 Zone Append Size Limit: 0 00:22:42.932 00:22:42.932 00:22:42.932 Active Namespaces 00:22:42.932 ================= 00:22:42.932 get_feature(0x05) failed 00:22:42.932 Namespace ID:1 00:22:42.932 Command Set Identifier: NVM (00h) 00:22:42.932 Deallocate: Supported 00:22:42.932 Deallocated/Unwritten Error: Not Supported 00:22:42.932 Deallocated Read Value: Unknown 00:22:42.932 Deallocate in Write Zeroes: Not Supported 00:22:42.932 Deallocated Guard Field: 0xFFFF 00:22:42.932 Flush: Supported 00:22:42.932 Reservation: Not Supported 00:22:42.932 Namespace Sharing Capabilities: Multiple Controllers 00:22:42.932 Size (in LBAs): 3125627568 (1490GiB) 00:22:42.932 Capacity (in LBAs): 3125627568 (1490GiB) 00:22:42.932 Utilization (in LBAs): 3125627568 (1490GiB) 00:22:42.932 UUID: cbd72805-e873-4d19-b370-be31632a0cbd 00:22:42.932 Thin Provisioning: Not Supported 00:22:42.932 Per-NS Atomic Units: Yes 00:22:42.932 Atomic Boundary Size (Normal): 0 00:22:42.932 Atomic Boundary Size (PFail): 0 00:22:42.932 Atomic Boundary Offset: 0 00:22:42.932 NGUID/EUI64 Never Reused: No 00:22:42.932 ANA group ID: 1 00:22:42.932 Namespace Write Protected: No 00:22:42.932 Number of LBA Formats: 1 00:22:42.932 Current LBA Format: LBA Format #00 00:22:42.932 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:42.932 00:22:42.932 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:42.932 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.932 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:42.932 10:11:27 
nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:22:42.932 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:22:42.932 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:42.932 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.932 10:11:27 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:22:42.932 rmmod nvme_rdma 00:22:42.932 rmmod nvme_fabrics 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:22:42.932 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:42.933 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:22:42.933 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:22:42.933 10:11:28 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:22:46.221 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 
0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:22:46.221 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:22:47.193 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:22:47.461 00:22:47.461 real 0m14.784s 00:22:47.461 user 0m4.167s 00:22:47.461 sys 0m8.490s 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.461 ************************************ 00:22:47.461 END TEST nvmf_identify_kernel_target 00:22:47.461 ************************************ 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:47.461 ************************************ 00:22:47.461 START TEST nvmf_auth_host 00:22:47.461 ************************************ 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=rdma 00:22:47.461 * Looking for test storage... 00:22:47.461 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@47 -- # : 0 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:47.461 10:11:32 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:22:54.026 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- 
# echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:22:54.027 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:22:54.027 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:22:54.027 Found net devices under 0000:da:00.0: mlx_0_0 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:22:54.027 Found net devices under 
0000:da:00.1: mlx_0_1 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@420 -- # rdma_device_init 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # uname 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@62 -- # modprobe ib_cm 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@63 -- # modprobe ib_core 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@64 -- # modprobe ib_umad 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@66 -- # modprobe iw_cm 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # allocate_nic_ips 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # get_rdma_if_list 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 
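allocate_nic_ips filters the PCI-detected net devices through get_rdma_if_list, whose nested loop produces the [[ mlx_0_X == ... ]] comparisons being traced here. A sketch of that filter, assuming rxe_cfg (the rxe_cfg_small.sh wrapper) prints one RDMA-capable interface name per line:

    get_rdma_if_list() {
        local net_dev rxe_net_dev rxe_net_devs
        mapfile -t rxe_net_devs < <(rxe_cfg rxe-net)   # assumption: one ifname per line
        for net_dev in "${net_devs[@]}"; do            # net_devs filled by the PCI scan above
            for rxe_net_dev in "${rxe_net_devs[@]}"; do
                if [[ $net_dev == "$rxe_net_dev" ]]; then
                    echo "$net_dev"
                    continue 2                         # matched; move on to the next net_dev
                fi
            done
        done
    }

On this rig both mlx_0_0 and mlx_0_1 survive the filter, which is why the trace then assigns 192.168.100.8/24 and 192.168.100.9/24 to them.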
00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:22:54.027 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:54.027 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:22:54.027 altname enp218s0f0np0 00:22:54.027 altname ens818f0np0 00:22:54.027 inet 192.168.100.8/24 scope global mlx_0_0 00:22:54.027 valid_lft forever preferred_lft forever 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:22:54.027 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:22:54.027 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:54.028 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:22:54.028 altname enp218s0f1np1 00:22:54.028 altname ens818f1np1 00:22:54.028 inet 192.168.100.9/24 scope global mlx_0_1 00:22:54.028 valid_lft forever preferred_lft forever 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@86 -- # get_rdma_if_list 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_0 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # echo mlx_0_1 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # continue 2 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # awk '{print $4}' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@113 -- # cut -d/ -f1 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:22:54.028 192.168.100.9' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:22:54.028 192.168.100.9' 00:22:54.028 10:11:38 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # head -n 1 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:22:54.028 192.168.100.9' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # tail -n +2 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # head -n 1 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2648425 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2648425 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2648425 ']' 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
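For reference, the interface-address discovery traced above boils down to a short pipeline; a minimal sketch using only commands visible in the trace (the get_ip_address name and the mlx_0_* interfaces are taken verbatim from it):

get_ip_address() {
    local interface=$1
    # Print the interface's IPv4 address without its prefix length,
    # e.g. "192.168.100.8/24" -> "192.168.100.8".
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)   # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)  # 192.168.100.9 in this run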
00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.028 10:11:38 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=1f559750d9eb23718753e52fa1690f62 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.O3h 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 1f559750d9eb23718753e52fa1690f62 0 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 1f559750d9eb23718753e52fa1690f62 0 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=1f559750d9eb23718753e52fa1690f62 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.O3h 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.O3h 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.O3h 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file 
key 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bbb60759e914c9013569aca59891d20ec9e8d187e4cdf34e08ca6e84a7388f07 00:22:54.028 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.g6s 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bbb60759e914c9013569aca59891d20ec9e8d187e4cdf34e08ca6e84a7388f07 3 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bbb60759e914c9013569aca59891d20ec9e8d187e4cdf34e08ca6e84a7388f07 3 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bbb60759e914c9013569aca59891d20ec9e8d187e4cdf34e08ca6e84a7388f07 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.g6s 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.g6s 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.g6s 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=30ca18903419994c0c1202ce8b149f426de39fcd3ba2c27b 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.JaJ 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 30ca18903419994c0c1202ce8b149f426de39fcd3ba2c27b 0 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key 
DHHC-1 30ca18903419994c0c1202ce8b149f426de39fcd3ba2c27b 0 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=30ca18903419994c0c1202ce8b149f426de39fcd3ba2c27b 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.JaJ 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.JaJ 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.JaJ 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dfd53af78953bde366a72308be216e9f1f2b932b9c4014b3 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.HAn 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dfd53af78953bde366a72308be216e9f1f2b932b9c4014b3 2 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dfd53af78953bde366a72308be216e9f1f2b932b9c4014b3 2 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dfd53af78953bde366a72308be216e9f1f2b932b9c4014b3 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.HAn 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.HAn 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.HAn 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=50bd84314ee9e42d9db0cf95603b0600 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rR1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 50bd84314ee9e42d9db0cf95603b0600 1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 50bd84314ee9e42d9db0cf95603b0600 1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=50bd84314ee9e42d9db0cf95603b0600 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rR1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rR1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.rR1 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:22:54.287 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=02e094b2a0d9931212ebc39a28b62458 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rZY 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 02e094b2a0d9931212ebc39a28b62458 1 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 02e094b2a0d9931212ebc39a28b62458 1 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
prefix=DHHC-1 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=02e094b2a0d9931212ebc39a28b62458 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:22:54.288 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rZY 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rZY 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.rZY 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7e49661734e9b0fb1bca8daaa95d0092ff5ddb2d1d6860ae 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.tmi 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7e49661734e9b0fb1bca8daaa95d0092ff5ddb2d1d6860ae 2 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7e49661734e9b0fb1bca8daaa95d0092ff5ddb2d1d6860ae 2 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7e49661734e9b0fb1bca8daaa95d0092ff5ddb2d1d6860ae 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.tmi 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.tmi 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.tmi 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 
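The gen_dhchap_key traces repeated above all follow the same pattern; a minimal sketch is below. The xxd, mktemp and chmod steps are verbatim from the trace, but xtrace does not show heredoc bodies, so the inline python encoding is an assumption based on the DH-HMAC-CHAP secret representation (DHHC-1:<digest>:base64(key || crc32):), which the logged keys are consistent with.

gen_dhchap_key() {
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local digest=$1 len=$2 key file
    # len hex characters of randomness (xxd reads len/2 raw bytes)
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
    file=$(mktemp -t "spdk.key-$digest.XXX")
    python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed CRC-32 trailer
print(f"DHHC-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
PY
    chmod 0600 "$file"
    echo "$file"
}

# Usage matching the trace: keys[0]=$(gen_dhchap_key null 32)
#                           ckeys[0]=$(gen_dhchap_key sha512 64)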
00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b5d24fcfd1aadbe7d10825f468d31998 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ePa 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b5d24fcfd1aadbe7d10825f468d31998 0 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b5d24fcfd1aadbe7d10825f468d31998 0 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b5d24fcfd1aadbe7d10825f468d31998 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ePa 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ePa 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ePa 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:22:54.546 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8b55d8a276f885bc56cdd904ab8e9fbe55562a109c816b0dabeb8da2df6f3ba8 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.F4e 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8b55d8a276f885bc56cdd904ab8e9fbe55562a109c816b0dabeb8da2df6f3ba8 3 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8b55d8a276f885bc56cdd904ab8e9fbe55562a109c816b0dabeb8da2df6f3ba8 3 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8b55d8a276f885bc56cdd904ab8e9fbe55562a109c816b0dabeb8da2df6f3ba8 00:22:54.547 10:11:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.F4e 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.F4e 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.F4e 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2648425 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2648425 ']' 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.547 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.805 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.805 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:22:54.805 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:54.805 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.O3h 00:22:54.805 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.805 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.805 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.g6s ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g6s 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.JaJ 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- 
# [[ -n /tmp/spdk.key-sha384.HAn ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.HAn 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.rR1 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.rZY ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rZY 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.tmi 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ePa ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ePa 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.F4e 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:22:54.806 10:11:39 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 192.168.100.8 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=192.168.100.8 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:54.806 10:11:39 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:22:58.089 Waiting for block devices as requested 00:22:58.089 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:22:58.089 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:58.089 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:58.089 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:58.089 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:58.089 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:58.089 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:58.089 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:58.089 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:58.089 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:22:58.347 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:22:58.347 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:22:58.347 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:22:58.604 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:22:58.604 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:22:58.604 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:22:58.862 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:59.428 No valid GPT data, bailing 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:59.428 10:11:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 192.168.100.8 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo rdma 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:59.428 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 192.168.100.8 -t rdma -s 4420 00:22:59.687 00:22:59.687 Discovery Log Number of Records 2, Generation counter 2 00:22:59.687 =====Discovery Log Entry 0====== 00:22:59.687 trtype: rdma 00:22:59.687 adrfam: ipv4 00:22:59.687 subtype: current discovery subsystem 00:22:59.687 treq: not specified, sq flow control disable supported 00:22:59.687 portid: 1 00:22:59.687 trsvcid: 4420 00:22:59.687 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:59.687 traddr: 192.168.100.8 00:22:59.687 eflags: none 00:22:59.687 rdma_prtype: not specified 00:22:59.687 rdma_qptype: connected 00:22:59.687 rdma_cms: rdma-cm 00:22:59.687 rdma_pkey: 0x0000 00:22:59.687 =====Discovery Log Entry 1====== 00:22:59.687 trtype: rdma 00:22:59.687 adrfam: ipv4 00:22:59.687 subtype: nvme subsystem 00:22:59.687 treq: not specified, sq flow control disable supported 00:22:59.687 portid: 1 00:22:59.687 trsvcid: 4420 00:22:59.687 subnqn: nqn.2024-02.io.spdk:cnode0 00:22:59.687 traddr: 192.168.100.8 00:22:59.687 eflags: none 00:22:59.687 rdma_prtype: not specified 00:22:59.687 rdma_qptype: connected 00:22:59.687 rdma_cms: rdma-cm 00:22:59.687 rdma_pkey: 0x0000 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:59.687 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:22:59.688 10:11:44 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.688 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.688 nvme0n1 00:22:59.946 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.946 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:22:59.946 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:22:59.946 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.946 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.947 10:11:44 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.206 nvme0n1 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
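Each connect/authenticate pass that follows repeats one RPC sequence; condensed here for reference, with every command and flag taken verbatim from the trace (rpc_cmd wraps SPDK's rpc.py against /var/tmp/spdk.sock):

# Register the host key and the bidirectional controller key.
rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.O3h
rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.g6s

# Restrict the initiator to the digest/DH-group combination under test.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# Attach with DH-HMAC-CHAP over RDMA; success is verified by
# bdev_nvme_get_controllers reporting nvme0, then the controller is detached.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
    -a 192.168.100.8 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0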
00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.206 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.465 nvme0n1 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
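On the kernel-target side, each nvmet_auth_set_key call (such as the sha256/ffdhe2048/keyid=2 one beginning here) echoes the digest, DH group, key and controller key into the host's configfs node. The echoed values are verbatim from the trace, but xtrace hides redirection targets, so the attribute paths below are assumptions based on the standard kernel nvmet host attributes:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)' > "$host/dhchap_hash"     # negotiated digest
echo ffdhe2048 > "$host/dhchap_dhgroup"       # negotiated DH group
echo "DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX:" > "$host/dhchap_key"
echo "DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF:" > "$host/dhchap_ctrl_key"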
00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:00.465 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:00.466 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:00.466 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:00.466 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:00.466 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:00.466 10:11:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.466 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.724 nvme0n1 00:23:00.724 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.724 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.724 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.724 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.724 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.724 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.724 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:00.725 10:11:45 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.725 10:11:45 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.984 nvme0n1 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for 
keyid in "${!keys[@]}" 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:00.984 10:11:46 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:00.984 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.243 nvme0n1 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:01.243 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 
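
Key id 4, which finished just above, is the unidirectional case: ckeys[4] is empty, so the [[ -z '' ]] test at host/auth.sh@51 skips echoing a controller key, and the attach at @61 carries only --dhchap-key key4, meaning the host authenticates itself but does not challenge the controller back. The mechanism is the ":+" parameter expansion at @58, sketched here with $ip, $hostnqn and $subnqn as stand-ins for the values visible in the log:

  # Expands to the option pair only when a controller key exists for this id;
  # for keyid=4 (empty ckeys[4]) the array stays empty and the flag is omitted.
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a "$ip" -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key "key${keyid}" "${ckey[@]}"

With key 4 done, the outer loop at host/auth.sh@101 advances to the next DH group, and the same five keys are replayed over ffdhe3072 in the entries that follow.
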
00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.244 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.502 nvme0n1 00:23:01.502 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.502 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:01.502 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:01.502 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.502 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.502 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.502 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:23:01.502 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.502 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.502 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:01.761 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:01.762 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.021 nvme0n1 00:23:02.021 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.021 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.021 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.021 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.021 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.021 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.021 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.021 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.021 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.021 10:11:46 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:02.021 10:11:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.021 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.280 nvme0n1 00:23:02.280 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.280 
10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.280 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.280 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.280 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.280 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.280 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.280 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.280 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.280 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.281 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 nvme0n1 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:02.540 10:11:47 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.540 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.799 nvme0n1 00:23:02.799 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.799 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:02.799 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:02.799 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.799 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.799 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.799 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.799 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:02.799 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:02.800 
10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:02.800 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.059 10:11:47 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.318 nvme0n1 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
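
At this point the sweep has entered its third DH group. Everything in this excerpt is driven by a pair of nested loops (host/auth.sh@101-104): for each group, every key id is first programmed into the target and then proven over a live attach. A skeleton of that structure, with the array contents abbreviated to what actually appears in this excerpt (the digest is sha256 throughout the portion shown; the script presumably iterates digests at an outer level as well):

  digest=sha256
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096)      # groups exercised so far in this log
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do            # key ids 0..4 in this run
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # target side (@103)
          connect_authenticate "$digest" "$dhgroup" "$keyid"    # SPDK host side (@104)
      done
  done
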
00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:03.318 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.319 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.578 nvme0n1 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.578 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:03.837 10:11:48 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:03.837 10:11:48 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.097 nvme0n1 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.097 10:11:49 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.097 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.356 nvme0n1 00:23:04.356 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.356 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.356 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.356 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.356 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.356 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.356 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.356 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.356 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.356 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.615 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.882 nvme0n1 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:04.882 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:04.883 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:04.883 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:04.883 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.883 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:04.883 10:11:49 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.450 nvme0n1 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:05.450 10:11:50 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:05.450 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 
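The get_main_ns_ip fragments that keep recurring above (nvmf/common.sh@741-755) all trace the same small helper: it maps the active transport to the name of the variable that holds the target address, then dereferences that name. A minimal bash sketch of that logic, reconstructed from the xtrace output; the variable names and the sequence of checks come straight from the trace, but the exact body in nvmf/common.sh may differ:

    # Sketch reconstructed from the xtrace above; not the verbatim helper.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable names, not values
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                   # "rdma" in this run
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}                   # -> NVMF_FIRST_TARGET_IP
        [[ -z ${!ip} ]] && return 1                            # indirect expansion
        echo "${!ip}"                                          # 192.168.100.8 here
    }

The indirection is what lets one helper serve both transports: under rdma it resolves NVMF_FIRST_TARGET_IP, under tcp it would resolve NVMF_INITIATOR_IP instead.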
00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:05.451 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.019 nvme0n1 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.019 10:11:50 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.278 nvme0n1 00:23:06.278 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.278 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.278 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.278 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.278 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.278 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.537 10:11:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:06.537 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.538 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.797 nvme0n1 00:23:06.797 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.797 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:06.797 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:06.797 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.797 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.797 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.797 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.797 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:06.797 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.797 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:07.055 10:11:51 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.055 10:11:51 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.314 nvme0n1 00:23:07.314 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.314 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:07.314 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
jq -r '.[].name' 00:23:07.314 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.314 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.314 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.314 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:07.314 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:07.314 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.314 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:07.573 10:11:52 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.573 10:11:52 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.141 nvme0n1 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:08.141 10:11:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:08.141 10:11:53 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.141 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.747 nvme0n1 00:23:08.747 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.747 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:08.747 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:08.747 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.747 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:08.747 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:08.747 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.747 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:08.747 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:08.747 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.006 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.006 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.006 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:09.006 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.006 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.006 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup 
keyid ckey 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.007 10:11:53 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.575 nvme0n1 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:09.575 10:11:54 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.575 10:11:54 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.143 nvme0n1 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.143 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.402 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.970 nvme0n1 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.970 10:11:55 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.970 10:11:55 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.229 nvme0n1 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe2048 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.229 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.230 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.488 nvme0n1 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host 
-- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.488 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.489 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.747 nvme0n1 00:23:11.747 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.747 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.747 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.747 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.747 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.747 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.747 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.747 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.747 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.748 10:11:56 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.748 10:11:56 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.007 nvme0n1 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 
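The nvmet_auth_set_key rounds traced through here (host/auth.sh@42-51) are the target half of each iteration: before the host connects, the test programs the digest, the DH group, and the DHHC-1 secrets for one keyid into the kernel nvmet target. A minimal sketch of those steps, assuming the usual kernel nvmet configfs layout (the host-entry path and attribute names below do not appear in this trace and are illustrative):

    # Target-side key setup for one keyid, mirroring the echoes at auth.sh@48-51.
    # Only the echoed values come from this log; the configfs path is assumed.
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)' > "$host/dhchap_hash"      # digest   (auth.sh@48)
    echo ffdhe2048      > "$host/dhchap_dhgroup"   # DH group (auth.sh@49)
    echo "$key"         > "$host/dhchap_key"       # host secret (auth.sh@50)
    # auth.sh@51 writes a controller key only when one exists for this keyid;
    # keyid 4 just above has ckey='' and therefore stays unidirectional.
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"

As for the secrets themselves: in a DHHC-1:NN:<base64>: string (NVMe TP 8006), NN records the optional HMAC transform applied to the secret (00 none, 01/02/03 for SHA-256/384/512, matching 32/48/64-byte secrets), and the base64 payload carries the secret followed by its CRC-32, which is why the keys echoed in this log come in several lengths.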
00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.007 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.266 nvme0n1 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.266 
10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:12.266 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.525 nvme0n1 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.525 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.784 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.785 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.785 10:11:57 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.044 nvme0n1 00:23:13.044 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.044 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.044 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.044 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.044 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.044 10:11:57 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.044 10:11:58 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.044 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.303 nvme0n1 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 
3 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.303 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.562 nvme0n1 00:23:13.562 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.562 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.562 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.562 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.562 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 
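connect_authenticate (host/auth.sh@55-65), which the trace is entering again here, is the host half of each iteration: pin the bdev layer to the digest and DH group under test, attach with the matching key names, check that the controller comes up as nvme0, then detach. Condensed from the rpc_cmd calls traced in this section, one round looks like the following (invoking the RPCs through scripts/rpc.py is an assumption; the test wraps them in rpc_cmd):

    # Host side of one iteration; all arguments are copied from the traced RPCs.
    scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 \
        -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0

Note that --dhchap-key and --dhchap-ctrlr-key take key names (key3, ckey3) rather than raw DHHC-1 strings; the secrets were registered under those names earlier in the run, and ckeyN is only passed when a controller key exists for that keyid (the ${ckeys[keyid]:+...} expansion at auth.sh@58).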
00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.563 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.820 nvme0n1 00:23:13.821 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.821 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:13.821 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:13.821 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.821 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.821 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:13.821 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:13.821 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:13.821 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:13.821 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
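The stretch of trace above completes the ffdhe3072 pass, and the next timestamps begin ffdhe4096. The @101-@104 markers reveal the driver loop in test/nvmf/host/auth.sh; a minimal sketch of that loop, assuming the keys/ckeys arrays and both helpers are defined earlier in the script as their expanded traces imply:

  digest=sha384
  # Groups seen in this excerpt; earlier passes precede it in the log.
  dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for dhgroup in "${dhgroups[@]}"; do                       # host/auth.sh@101
      for keyid in "${!keys[@]}"; do                        # host/auth.sh@102
          nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"      # @103: program the target side
          connect_authenticate "$digest" "$dhgroup" "$keyid"    # @104: attach, verify, detach
      done
  done
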
00:23:14.080 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.080 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:14.080 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.080 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:14.080 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.080 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:14.080 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:14.080 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:14.080 10:11:58 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.080 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.081 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.081 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:14.081 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.081 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.081 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:14.081 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:14.081 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:14.081 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.081 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.340 nvme0n1 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
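The @48-@51 echoes above are nvmet_auth_set_key programming the kernel nvmet target; xtrace shows the echoed values but elides the redirection targets. A sketch under the assumption that they land in the host's configfs authentication attributes (the paths below are hypothetical, not read from the trace):

  nvmet_auth_set_key() {
      local digest dhgroup keyid key ckey                   # host/auth.sh@42
      digest=$1 dhgroup=$2 keyid=$3                         # @44
      key=${keys[keyid]} ckey=${ckeys[keyid]}               # @45-@46
      # Assumed redirection targets; xtrace hides them in the log above.
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac(${digest})" > "$host/dhchap_hash"          # @48
      echo "$dhgroup" > "$host/dhchap_dhgroup"              # @49
      echo "$key" > "$host/dhchap_key"                      # @50
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # @51
  }
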
00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.340 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.599 nvme0n1 00:23:14.599 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.599 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
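Each connect_authenticate expansion above (@55-@65) follows the same five steps; a condensed sketch of the flow the trace is exercising, where rpc_cmd and get_main_ns_ip are the harness helpers whose own expansions (nvmf/common.sh@741-@755) appear inline in the log:

  connect_authenticate() {
      local digest dhgroup keyid ckey                                   # host/auth.sh@55
      digest=$1 dhgroup=$2 keyid=$3                                     # @57
      # Empty when no controller key exists for this keyid (see the illustration further down)
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})         # @58
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
          --dhchap-dhgroups "$dhgroup"                                  # @60
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"                       # @61
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # @64
      rpc_cmd bdev_nvme_detach_controller nvme0                         # @65
  }
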
00:23:14.599 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.599 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.599 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.599 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.599 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.599 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:14.599 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.599 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:14.857 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set 
+x 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.858 10:11:59 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.116 nvme0n1 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.116 10:12:00 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.116 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.374 nvme0n1 00:23:15.374 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.375 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.375 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.375 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.375 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.375 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.375 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.375 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.375 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.375 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.634 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.894 nvme0n1 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.894 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.895 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.895 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:15.895 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:15.895 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:15.895 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:15.895 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:15.895 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.895 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.895 10:12:00 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.461 nvme0n1 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.461 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.033 nvme0n1 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.033 10:12:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.033 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.034 10:12:01 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.034 10:12:01 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.293 nvme0n1 00:23:17.293 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.293 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.293 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.293 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.293 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.293 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 
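In every key-index-4 cycle above, the attach at @61 runs without --dhchap-ctrlr-key while the other indices pass it; that asymmetry comes from the ${ckeys[keyid]:+...} expansion at @58, which yields nothing when the stored controller key is empty. A standalone illustration with hypothetical key values:

  ckeys=([0]=DHHC-1:03:example [4]="")   # hypothetical values, for illustration only
  for keyid in 0 4; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      printf 'keyid=%s -> %s extra arg(s): %s\n' "$keyid" "${#ckey[@]}" "${ckey[*]}"
  done
  # keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
  # keyid=4 -> 0 extra arg(s):
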
00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:17.551 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:17.552 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.552 10:12:02 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.810 nvme0n1 00:23:17.810 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.810 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.810 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.810 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.810 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.810 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.069 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.069 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.069 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.069 10:12:02 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.069 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.328 nvme0n1 00:23:18.328 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.328 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.328 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.328 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.328 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.328 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid 
key ckey 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 
192.168.100.8 ]] 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.587 10:12:03 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 nvme0n1 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.153 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 nvme0n1 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.089 10:12:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.089 10:12:04 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.089 10:12:04 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.657 nvme0n1 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe8192 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.657 10:12:05 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.224 nvme0n1 00:23:21.224 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.224 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.224 
10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.224 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.224 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.224 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.224 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.224 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.224 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.224 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.484 10:12:06 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.052 nvme0n1 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:22.052 10:12:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.052 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.053 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.053 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.053 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:22.053 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:22.053 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.053 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:22.053 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:22.053 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:22.053 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.053 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.312 nvme0n1 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.312 10:12:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.312 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.572 nvme0n1 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.572 
10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.572 
10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.572 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.831 nvme0n1 00:23:22.831 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.831 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.831 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.831 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.831 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.831 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.831 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.831 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.831 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.831 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:23.089 10:12:07 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.089 10:12:07 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:23.089 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:23.090 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:23.090 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.090 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.090 nvme0n1 00:23:23.090 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.090 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.090 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.090 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.090 10:12:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.090 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 
00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.348 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.348 nvme0n1 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.608 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.608 10:12:08 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.866 nvme0n1 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:23.866 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:23.867 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:23.867 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:23.867 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.867 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.867 10:12:08 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.125 nvme0n1 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.125 
10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 
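Annotation: the host/auth.sh@42-@51 entries above repeat once per key and program the kernel nvmet target with the DH-HMAC-CHAP material before each connect attempt. Below is a minimal reconstruction from the xtrace, not the verbatim script; the xtrace does not show where the echoes are redirected, so the configfs attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key under an assumed host directory $cfs) are an assumption.

nvmet_auth_set_key() {
    # Reconstructed sketch: push one key's auth parameters to the target side.
    local digest dhgroup keyid key ckey
    digest="$1" dhgroup="$2" keyid="$3"
    key="${keys[keyid]}" ckey="${ckeys[keyid]}"

    echo "hmac(${digest})" > "${cfs}/dhchap_hash"   # trace @48: echo 'hmac(sha512)'
    echo "${dhgroup}" > "${cfs}/dhchap_dhgroup"     # trace @49: echo ffdhe3072
    echo "${key}" > "${cfs}/dhchap_key"             # trace @50: host key
    # keyid 4 carries no controller key, hence the [[ -z ... ]] guard at @51:
    [[ -z ${ckey} ]] || echo "${ckey}" > "${cfs}/dhchap_ctrl_key"
}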
00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.125 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.383 nvme0n1 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:24.383 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:24.384 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:24.384 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:24.384 10:12:09 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:24.384 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.384 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:24.384 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:24.384 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:24.384 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.384 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:24.384 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.384 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.642 nvme0n1 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.642 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.906 
10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:24.906 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.907 10:12:09 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.194 nvme0n1 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:25.194 10:12:10 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.194 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.461 nvme0n1 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
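Annotation: the host/auth.sh@55-@65 entries are the initiator side of each iteration, driven through SPDK's JSON-RPC (rpc_cmd wraps rpc.py). Every command below appears verbatim in the trace; only the function wrapper and the comments are reconstruction. The attach is the actual test: bdev_nvme_attach_controller only succeeds if the DH-HMAC-CHAP handshake against the key programmed above passes, and the @64 name check confirms nvme0 came up before it is detached for the next digest/dhgroup/key combination.

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # The controller (bidirectional) key is optional; see the @58 expansion:
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    rpc_cmd bdev_nvme_detach_controller nvme0
}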
00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 
-- # xtrace_disable 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.461 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.769 nvme0n1 00:23:25.769 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.769 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.769 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.769 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.769 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.769 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.769 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.769 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.769 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.769 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.028 
10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.028 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.029 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.029 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.029 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.029 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.029 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.029 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.029 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.029 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma 
-f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.029 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.029 10:12:10 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.287 nvme0n1 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.287 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.546 nvme0n1 00:23:26.546 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.546 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.546 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.546 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.546 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.546 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.546 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.546 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.546 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.546 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.805 10:12:11 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_FIRST_TARGET_IP 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.805 10:12:11 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.065 nvme0n1 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:27.065 10:12:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.065 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.633 nvme0n1 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.633 10:12:12 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 
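Annotation: get_main_ns_ip, whose nvmf/common.sh@741-@755 expansion recurs before every attach in this excerpt, resolves the address the initiator should dial from the transport in use. A sketch reconstructed from that trace follows. The table maps a transport name to the name of an environment variable, which is then dereferenced with bash indirection; the variable holding the transport ("rdma" in this run) is not named in the trace, so TEST_TRANSPORT below is an assumption. For rdma it resolves NVMF_FIRST_TARGET_IP, which is 192.168.100.8 here, the -a argument seen at @61.

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # target-side RDMA address
    ip_candidates["tcp"]=NVMF_INITIATOR_IP       # initiator address for TCP

    # @747 guards: transport known and mapped (TEST_TRANSPORT is assumed)
    [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -n ${!ip} ]] || return 1   # @750: [[ -z 192.168.100.8 ]]
    echo "${!ip}"                 # @755
}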
00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.633 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.634 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.634 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:27.634 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:27.634 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:27.634 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:27.634 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:27.634 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:27.634 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.634 10:12:12 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.200 nvme0n1 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.200 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 
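The key= entry just above and the ckey=/echo entries that follow belong to nvmet_auth_set_key (host/auth.sh@42-51), here mid-way through programming key id 2: it loads the secrets the kernel nvmet target will expect from this host. Because xtrace hides redirections, only the echoed values appear in the log; the configfs attribute names in the sketch below are therefore an assumption (they follow the upstream nvmet DH-HMAC-CHAP attributes), and the function is a reconstruction, not the verbatim test helper.

    # Hedged reconstruction of nvmet_auth_set_key; redirect targets assumed,
    # keys/ckeys assumed to be the test's global secret arrays.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac($digest)" > "$host/dhchap_hash"  # e.g. hmac(sha512)
        echo "$dhgroup" > "$host/dhchap_dhgroup"    # e.g. ffdhe6144
        echo "$key" > "$host/dhchap_key"            # secret the host must present
        # A controller key is written only for bidirectional authentication;
        # key id 4 in this log has an empty ckey, so the write is skipped there.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }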
00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.201 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:28.458 nvme0n1 00:23:28.458 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.458 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.458 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.458 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.458 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.458 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.458 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.458 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.458 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.459 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.717 10:12:13 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.717 10:12:13 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.975 nvme0n1 00:23:28.975 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.975 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.975 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.975 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.975 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.975 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.975 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.975 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.975 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.976 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 
4 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.234 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.493 nvme0n1 00:23:29.493 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.493 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.493 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.493 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.493 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.493 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.493 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.493 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.493 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.493 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWY1NTk3NTBkOWViMjM3MTg3NTNlNTJmYTE2OTBmNjLpstae: 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: ]] 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YmJiNjA3NTllOTE0YzkwMTM1NjlhY2E1OTg5MWQyMGVjOWU4ZDE4N2U0Y2RmMzRlMDhjYTZlODRhNzM4OGYwNxsFoJU=: 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:29.752 10:12:14 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.752 10:12:14 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.319 nvme0n1 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
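The detach entry above closes one full connect_authenticate cycle (host/auth.sh@55-65): configure the initiator for exactly one digest/dhgroup pair, attach with the matching DH-HMAC-CHAP keys, confirm the controller came up, and disconnect before the next key id is tried. Written out as plain rpc.py calls, the iteration that just finished (sha512/ffdhe8192, key id 0) looks roughly like the following; this is a sketch that assumes the secrets were registered earlier under the keyring names key0/ckey0 and that scripts/rpc.py talks to the running target.

    rpc=scripts/rpc.py

    # Restrict the host to the digest/dhgroup pair under test.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # Connect with DH-HMAC-CHAP: key0 authenticates the host to the target,
    # ckey0 lets the host verify the controller (bidirectional).
    $rpc bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 \
        -a 192.168.100.8 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # The attach only succeeds if authentication passed; verify, then detach
    # so the next key id starts from a clean slate.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0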
00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.319 
10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.319 10:12:15 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.886 nvme0n1 00:23:30.886 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.886 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.886 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.886 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.886 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.886 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.886 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.886 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.886 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.886 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 
-- # echo ffdhe8192 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTBiZDg0MzE0ZWU5ZTQyZDlkYjBjZjk1NjAzYjA2MDADfMeX: 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: ]] 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDJlMDk0YjJhMGQ5OTMxMjEyZWJjMzlhMjhiNjI0NTjciuFF: 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.145 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.714 nvme0n1 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.714 10:12:16 
nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:N2U0OTY2MTczNGU5YjBmYjFiY2E4ZGFhYTk1ZDAwOTJmZjVkZGIyZDFkNjg2MGFlQc+zXA==: 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: ]] 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjVkMjRmY2ZkMWFhZGJlN2QxMDgyNWY0NjhkMzE5OTgMfAMQ: 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.714 10:12:16 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.283 nvme0n1 00:23:32.283 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.283 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.283 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.283 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.283 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe8192 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGI1NWQ4YTI3NmY4ODViYzU2Y2RkOTA0YWI4ZTlmYmU1NTU2MmExMDljODE2YjBkYWJlYjhkYTJkZjZmM2JhOH4dTfM=: 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:32.542 10:12:17 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.542 10:12:17 
nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.109 nvme0n1 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzBjYTE4OTAzNDE5OTk0YzBjMTIwMmNlOGIxNDlmNDI2ZGUzOWZjZDNiYTJjMjdi8KAYYQ==: 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: ]] 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGZkNTNhZjc4OTUzYmRlMzY2YTcyMzA4YmUyMTZlOWYxZjJiOTMyYjljNDAxNGIzH9k6uw==: 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.109 10:12:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.109 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.110 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.369 request: 00:23:33.369 { 00:23:33.369 "name": "nvme0", 00:23:33.369 "trtype": "rdma", 00:23:33.369 "traddr": "192.168.100.8", 00:23:33.369 "adrfam": "ipv4", 00:23:33.369 "trsvcid": "4420", 00:23:33.369 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:33.369 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:33.369 "prchk_reftag": false, 00:23:33.369 "prchk_guard": false, 00:23:33.369 "hdgst": false, 00:23:33.369 "ddgst": false, 00:23:33.369 "method": "bdev_nvme_attach_controller", 00:23:33.369 "req_id": 1 00:23:33.369 } 00:23:33.369 Got JSON-RPC error response 00:23:33.369 response: 00:23:33.369 { 00:23:33.369 "code": -5, 00:23:33.369 "message": "Input/output error" 00:23:33.369 } 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.369 request: 
00:23:33.369 { 00:23:33.369 "name": "nvme0", 00:23:33.369 "trtype": "rdma", 00:23:33.369 "traddr": "192.168.100.8", 00:23:33.369 "adrfam": "ipv4", 00:23:33.369 "trsvcid": "4420", 00:23:33.369 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:33.369 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:33.369 "prchk_reftag": false, 00:23:33.369 "prchk_guard": false, 00:23:33.369 "hdgst": false, 00:23:33.369 "ddgst": false, 00:23:33.369 "dhchap_key": "key2", 00:23:33.369 "method": "bdev_nvme_attach_controller", 00:23:33.369 "req_id": 1 00:23:33.369 } 00:23:33.369 Got JSON-RPC error response 00:23:33.369 response: 00:23:33.369 { 00:23:33.369 "code": -5, 00:23:33.369 "message": "Input/output error" 00:23:33.369 } 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.369 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z rdma ]] 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_FIRST_TARGET_IP ]] 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_FIRST_TARGET_IP 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 192.168.100.8 ]] 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 192.168.100.8 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -f ipv4 -a 192.168.100.8 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.628 request: 00:23:33.628 { 00:23:33.628 "name": "nvme0", 00:23:33.628 "trtype": "rdma", 00:23:33.628 "traddr": "192.168.100.8", 00:23:33.628 "adrfam": "ipv4", 00:23:33.628 "trsvcid": "4420", 00:23:33.628 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:33.628 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:33.628 "prchk_reftag": false, 00:23:33.628 "prchk_guard": false, 00:23:33.628 "hdgst": false, 00:23:33.628 "ddgst": false, 00:23:33.628 "dhchap_key": "key1", 00:23:33.628 "dhchap_ctrlr_key": "ckey2", 00:23:33.628 "method": "bdev_nvme_attach_controller", 00:23:33.628 "req_id": 1 00:23:33.628 } 00:23:33.628 Got JSON-RPC error response 00:23:33.628 response: 00:23:33.628 { 00:23:33.628 "code": -5, 00:23:33.628 "message": "Input/output error" 00:23:33.628 } 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:23:33.628 rmmod nvme_rdma 00:23:33.628 rmmod nvme_fabrics 00:23:33.628 10:12:18 
nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2648425 ']' 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2648425 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2648425 ']' 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2648425 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2648425 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2648425' 00:23:33.628 killing process with pid 2648425 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2648425 00:23:33.628 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2648425 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_rdma nvmet 00:23:33.887 10:12:18 nvmf_rdma.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:37.176 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:23:37.176 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:23:38.114 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:23:38.373 10:12:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.O3h /tmp/spdk.key-null.JaJ /tmp/spdk.key-sha256.rR1 /tmp/spdk.key-sha384.tmi /tmp/spdk.key-sha512.F4e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvme-auth.log 00:23:38.373 10:12:23 nvmf_rdma.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:23:40.908 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:40.908 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:23:40.908 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:23:41.169 00:23:41.169 real 0m53.687s 00:23:41.169 user 0m49.235s 00:23:41.169 sys 0m12.521s 00:23:41.169 10:12:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:41.169 10:12:26 nvmf_rdma.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.169 ************************************ 00:23:41.169 END TEST nvmf_auth_host 00:23:41.169 ************************************ 00:23:41.169 10:12:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ rdma == \t\c\p ]] 00:23:41.169 10:12:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:23:41.169 10:12:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:23:41.169 10:12:26 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:23:41.169 10:12:26 
nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:41.169 10:12:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:41.169 10:12:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:41.169 10:12:26 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:41.169 ************************************ 00:23:41.169 START TEST nvmf_bdevperf 00:23:41.169 ************************************ 00:23:41.169 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:23:41.169 * Looking for test storage... 00:23:41.169 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
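
Note: the nvmf/common.sh block sourced above fixes the initiator-side defaults used throughout this test: NVMF_PORT=4420, NVMF_IP_PREFIX=192.168.100, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn, a host NQN generated by 'nvme gen-hostnqn', and NVME_CONNECT='nvme connect' ('-i 15' is appended further down once RDMA hardware is detected). A minimal sketch of the nvme-cli call those variables expand to, assuming the 192.168.100.8 first-target address assigned later in this run; every value is taken from variables logged in this section:

  # hedged illustration only, not a command executed by this log
  nvme connect -i 15 -t rdma -a 192.168.100.8 -s 4420 \
    -n nqn.2016-06.io.spdk:testnqn \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 \
    --hostid=00ad29c2-ccbd-e911-906e-0017a4403562
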
00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:41.428 10:12:26 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:23:46.769 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:23:46.769 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # 
[[ rdma == rdma ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:23:46.769 Found net devices under 0000:da:00.0: mlx_0_0 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:23:46.769 Found net devices under 0000:da:00.1: mlx_0_1 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@420 -- # rdma_device_init 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # uname 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@62 -- # modprobe ib_cm 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@63 -- # modprobe ib_core 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@64 -- # modprobe ib_umad 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@66 -- # modprobe iw_cm 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@67 -- # 
modprobe rdma_cm 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # allocate_nic_ips 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # get_rdma_if_list 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:23:46.769 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:23:46.770 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:46.770 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:23:46.770 altname enp218s0f0np0 00:23:46.770 altname ens818f0np0 00:23:46.770 inet 192.168.100.8/24 scope global mlx_0_0 00:23:46.770 
valid_lft forever preferred_lft forever 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:23:46.770 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:46.770 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:23:46.770 altname enp218s0f1np1 00:23:46.770 altname ens818f1np1 00:23:46.770 inet 192.168.100.9/24 scope global mlx_0_1 00:23:46.770 valid_lft forever preferred_lft forever 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # get_rdma_if_list 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_0 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:46.770 10:12:31 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # echo mlx_0_1 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # continue 2 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # awk '{print $4}' 00:23:46.770 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@113 -- # cut -d/ -f1 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:23:47.030 192.168.100.9' 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:23:47.030 192.168.100.9' 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # head -n 1 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:23:47.030 192.168.100.9' 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # tail -n +2 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # head -n 1 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:47.030 10:12:31 
nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2662611 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2662611 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2662611 ']' 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:47.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:47.030 10:12:31 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:47.030 [2024-07-25 10:12:32.026620] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:47.030 [2024-07-25 10:12:32.026683] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:47.030 EAL: No free 2048 kB hugepages reported on node 1 00:23:47.030 [2024-07-25 10:12:32.096158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:47.030 [2024-07-25 10:12:32.170483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:47.030 [2024-07-25 10:12:32.170521] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:47.030 [2024-07-25 10:12:32.170527] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:47.030 [2024-07-25 10:12:32.170533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:47.030 [2024-07-25 10:12:32.170539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
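
Note: nvmfappstart above launches build/bin/nvmf_tgt with '-i 0 -e 0xFFFF -m 0xE', and waitforlisten blocks until pid 2662611 answers on its RPC socket. A minimal sketch of that start-and-poll pattern, assuming the default /var/tmp/spdk.sock socket path (the real waitforlisten helper in autotest_common.sh adds retry limits and liveness checks on the pid):

  # hedged sketch of the start-and-wait pattern used here
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the RPC socket until the target can service requests
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.5
  done
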
00:23:47.030 [2024-07-25 10:12:32.170647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:47.030 [2024-07-25 10:12:32.170671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:47.030 [2024-07-25 10:12:32.170671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:47.967 [2024-07-25 10:12:32.890734] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x206e200/0x20726f0) succeed. 00:23:47.967 [2024-07-25 10:12:32.899634] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x206f7a0/0x20b3d80) succeed. 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.967 10:12:32 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:47.967 Malloc0 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set 
+x 00:23:47.967 [2024-07-25 10:12:33.038872] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:23:47.967 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:47.968 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:47.968 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:47.968 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:47.968 { 00:23:47.968 "params": { 00:23:47.968 "name": "Nvme$subsystem", 00:23:47.968 "trtype": "$TEST_TRANSPORT", 00:23:47.968 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:47.968 "adrfam": "ipv4", 00:23:47.968 "trsvcid": "$NVMF_PORT", 00:23:47.968 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:47.968 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:47.968 "hdgst": ${hdgst:-false}, 00:23:47.968 "ddgst": ${ddgst:-false} 00:23:47.968 }, 00:23:47.968 "method": "bdev_nvme_attach_controller" 00:23:47.968 } 00:23:47.968 EOF 00:23:47.968 )") 00:23:47.968 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:47.968 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:23:47.968 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:47.968 10:12:33 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:47.968 "params": { 00:23:47.968 "name": "Nvme1", 00:23:47.968 "trtype": "rdma", 00:23:47.968 "traddr": "192.168.100.8", 00:23:47.968 "adrfam": "ipv4", 00:23:47.968 "trsvcid": "4420", 00:23:47.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:47.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:47.968 "hdgst": false, 00:23:47.968 "ddgst": false 00:23:47.968 }, 00:23:47.968 "method": "bdev_nvme_attach_controller" 00:23:47.968 }' 00:23:47.968 [2024-07-25 10:12:33.087138] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:47.968 [2024-07-25 10:12:33.087179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2662805 ] 00:23:47.968 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.227 [2024-07-25 10:12:33.155503] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.227 [2024-07-25 10:12:33.228804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.485 Running I/O for 1 seconds... 
00:23:49.422
00:23:49.422 Latency(us)
00:23:49.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:49.422 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:49.422 Verification LBA range: start 0x0 length 0x4000
00:23:49.422 Nvme1n1 : 1.01 17824.37 69.63 0.00 0.00 7141.82 2574.63 12170.97
00:23:49.422 ===================================================================================================================
00:23:49.422 Total : 17824.37 69.63 0.00 0.00 7141.82 2574.63 12170.97
00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2663042 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.681 { 00:23:49.681 "params": { 00:23:49.681 "name": "Nvme$subsystem", 00:23:49.681 "trtype": "$TEST_TRANSPORT", 00:23:49.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.681 "adrfam": "ipv4", 00:23:49.681 "trsvcid": "$NVMF_PORT", 00:23:49.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.681 "hdgst": ${hdgst:-false}, 00:23:49.681 "ddgst": ${ddgst:-false} 00:23:49.681 }, 00:23:49.681 "method": "bdev_nvme_attach_controller" 00:23:49.681 } 00:23:49.681 EOF 00:23:49.681 )") 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:23:49.681 10:12:34 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:49.681 "params": { 00:23:49.681 "name": "Nvme1", 00:23:49.681 "trtype": "rdma", 00:23:49.681 "traddr": "192.168.100.8", 00:23:49.681 "adrfam": "ipv4", 00:23:49.681 "trsvcid": "4420", 00:23:49.681 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.681 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:49.681 "hdgst": false, 00:23:49.681 "ddgst": false 00:23:49.681 }, 00:23:49.681 "method": "bdev_nvme_attach_controller" 00:23:49.681 }' 00:23:49.681 [2024-07-25 10:12:34.656852] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
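
Note: both bdevperf runs above receive their bdev configuration over an anonymous pipe (--json /dev/fd/62 and /dev/fd/63) produced by gen_nvmf_target_json; the second run extends the verify workload with '-t 15 -f' ahead of the 'kill -9 2662611' that follows. A standalone sketch of an equivalent invocation with the config in a file: the outer "subsystems"/"bdev" envelope below is an assumption for illustration, since only the inner fragment is echoed in this log, and /tmp/nvme1.json is a hypothetical path:

  # hedged sketch; envelope format and file path are assumptions
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf \
    -q 128 -o 4096 -w verify -t 1 --json /tmp/nvme1.json

  # contents of the hypothetical /tmp/nvme1.json:
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "rdma",
              "traddr": "192.168.100.8",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
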
00:23:49.681 [2024-07-25 10:12:34.656898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2663042 ]
00:23:49.681 EAL: No free 2048 kB hugepages reported on node 1
00:23:49.681 [2024-07-25 10:12:34.724476] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:49.681 [2024-07-25 10:12:34.793354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:49.940 Running I/O for 15 seconds...
00:23:52.473 10:12:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2662611
00:23:52.473 10:12:37 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:23:53.853 [2024-07-25 10:12:38.651750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:118032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:53.853 [2024-07-25 10:12:38.651793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:42cc8000 sqhd:52b0 p:0 m:0 dnr:0
[... 93 further WRITE command/completion pairs condensed: lba 118040 through 118776 in steps of 8, every one completing ABORTED - SQ DELETION (00/08) with the same cdw0:42cc8000 sqhd:52b0 ...]
00:23:53.855 [2024-07-25 10:12:38.653107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:117760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000075fe000 len:0x1000 key:0x184500
00:23:53.855 [2024-07-25 10:12:38.653114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:42cc8000 sqhd:52b0 p:0 m:0 dnr:0
[... 32 further READ command/completion pairs condensed: lba 117768 through 118016 in steps of 8, each with its own SGL KEYED DATA BLOCK address (key:0x184500), every one completing ABORTED - SQ DELETION (00/08) ...]
00:23:53.856 [2024-07-25 10:12:38.655454] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:53.856 [2024-07-25 10:12:38.655486] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:53.856 [2024-07-25 10:12:38.655507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118024 len:8 PRP1 0x0 PRP2 0x0
00:23:53.856 [2024-07-25 10:12:38.655529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:53.856 [2024-07-25 10:12:38.655599] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller.
00:23:53.856 [2024-07-25 10:12:38.658688] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:53.856 [2024-07-25 10:12:38.672496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:53.856 [2024-07-25 10:12:38.676115] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:23:53.856 [2024-07-25 10:12:38.676137] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:23:53.856 [2024-07-25 10:12:38.676144] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:23:54.793 [2024-07-25 10:12:39.680190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:54.793 [2024-07-25 10:12:39.680240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
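What the burst above records: host/bdevperf.sh@33 killed the running nvmf_tgt (pid 2662611) in the middle of bdevperf's 15-second verify run, so every I/O still queued on qpair 1 was completed as ABORTED - SQ DELETION and bdev_nvme dropped into its reset/reconnect loop; the RDMA_CM_EVENT_REJECTED retries keep failing below until the target is restarted. A minimal standalone sketch of the host side of this scenario, assuming an SPDK checkout as the working directory; the config path /tmp/bdevperf.json and the controller name Nvme1 are illustrative, and $tgt_pid stands in for the target pid (2662611 in this run):

  # attach the remote namespace over RDMA as bdev Nvme1n1, then drive verify I/O at it
  cat > /tmp/bdevperf.json <<'EOF'
  {"subsystems": [{"subsystem": "bdev", "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {"name": "Nvme1", "trtype": "rdma", "adrfam": "ipv4",
                 "traddr": "192.168.100.8", "trsvcid": "4420",
                 "subnqn": "nqn.2016-06.io.spdk:cnode1"}}]}]}
  EOF
  ./build/examples/bdevperf --json /tmp/bdevperf.json -q 128 -o 4096 -w verify -t 15 &
  perf_pid=$!
  sleep 3
  kill -9 "$tgt_pid"    # queued I/O aborts with SQ DELETION; the reconnect loop begins
  # restarting the target inside bdevperf's retry window lets the run finish, as the log below shows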
00:23:54.793 [2024-07-25 10:12:39.680716] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:54.793 [2024-07-25 10:12:39.680725] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:54.793 [2024-07-25 10:12:39.680732] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:23:54.793 [2024-07-25 10:12:39.682686] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:54.793 [2024-07-25 10:12:39.683308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:54.793 [2024-07-25 10:12:39.695466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:54.793 [2024-07-25 10:12:39.698094] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:23:54.793 [2024-07-25 10:12:39.698111] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:23:54.793 [2024-07-25 10:12:39.698117] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:23:55.729 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2662611 Killed "${NVMF_APP[@]}" "$@"
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2664122
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2664122
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2664122 ']'
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:55.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:55.729 10:12:40 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:55.729 [2024-07-25 10:12:40.675237] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:23:55.729 [2024-07-25 10:12:40.675284] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:55.729 EAL: No free 2048 kB hugepages reported on node 1
00:23:55.729 [2024-07-25 10:12:40.702101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:55.729 [2024-07-25 10:12:40.702134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:55.729 [2024-07-25 10:12:40.702311] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:55.729 [2024-07-25 10:12:40.702322] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:55.729 [2024-07-25 10:12:40.702329] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:23:55.729 [2024-07-25 10:12:40.705085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:55.729 [2024-07-25 10:12:40.708669] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:55.729 [2024-07-25 10:12:40.711091] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8)
00:23:55.729 [2024-07-25 10:12:40.711110] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74
00:23:55.729 [2024-07-25 10:12:40.711117] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed040
00:23:55.729 [2024-07-25 10:12:40.741824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:23:55.729 [2024-07-25 10:12:40.820557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:55.729 [2024-07-25 10:12:40.820592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:55.729 [2024-07-25 10:12:40.820599] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:55.729 [2024-07-25 10:12:40.820606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:55.729 [2024-07-25 10:12:40.820611] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
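The tgt_init/nvmfappstart sequence above is what brings the target back: nvmf_tgt is relaunched with the logged arguments and waitforlisten polls the RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100) before any rpc_cmd is issued. Stripped of the harness helpers, the same step looks roughly like the sketch below; paths assume an SPDK checkout, and the 0.5-second poll interval is an assumption, not taken from the log:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # poll the RPC socket until the app answers, the way waitforlisten does
  for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
  done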
00:23:55.729 [2024-07-25 10:12:40.820655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:55.729 [2024-07-25 10:12:40.820758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:55.729 [2024-07-25 10:12:40.820759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:56.665 [2024-07-25 10:12:41.553099] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdb3200/0xdb76f0) succeed.
00:23:56.665 [2024-07-25 10:12:41.561989] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdb47a0/0xdf8d80) succeed.
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.665 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:56.665 Malloc0
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
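The five rpc_cmd calls above (host/bdevperf.sh@17 through @21) rebuild the target state the killed process lost: an RDMA transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and an RDMA listener. Outside the harness they map one-to-one onto scripts/rpc.py invocations with the same arguments; a sketch of the equivalent sequence:

  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The nvmf_rdma_listen notice that follows confirms the last call took effect.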
00:23:56.666 [2024-07-25 10:12:41.699063] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 ***
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:56.666 10:12:41 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2663042
00:23:56.666 [2024-07-25 10:12:41.715215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0
00:23:56.666 [2024-07-25 10:12:41.715239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:56.666 [2024-07-25 10:12:41.715417] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:23:56.666 [2024-07-25 10:12:41.715427] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:23:56.666 [2024-07-25 10:12:41.715435] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:23:56.666 [2024-07-25 10:12:41.715450] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:23:56.666 [2024-07-25 10:12:41.718201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:23:56.666 [2024-07-25 10:12:41.728423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:56.666 [2024-07-25 10:12:41.773150] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:06.642
00:24:06.642                                 Latency(us)
00:24:06.642 Device Information : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:24:06.642 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:06.642      Verification LBA range: start 0x0 length 0x4000
00:24:06.642      Nvme1n1            :      15.01   12994.19      50.76   10305.96       0.00    5473.10     337.43 1038589.56
00:24:06.642 ===================================================================================================================
00:24:06.642 Total              :   12994.19      50.76   10305.96       0.00    5473.10     337.43 1038589.56
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:24:06.642 rmmod nvme_rdma
00:24:06.642 rmmod nvme_fabrics
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2664122 ']'
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2664122
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2664122 ']'
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2664122
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2664122
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2664122'
00:24:06.642 killing process with pid 2664122
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2664122
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2664122
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:24:06.642
00:24:06.642 real 0m24.363s
00:24:06.642 user 1m4.265s
00:24:06.642 sys 0m5.276s
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:06.642 ************************************
00:24:06.642 END TEST nvmf_bdevperf
00:24:06.642 ************************************
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:06.642 ************************************
00:24:06.642 START TEST nvmf_target_disconnect
00:24:06.642 ************************************
00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma
00:24:06.642 * Looking for test storage...
00:24:06.642 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.642 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:06.643 10:12:50 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:11.934 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:11.934 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:11.935 Found 0000:da:00.1 (0x15b3 - 0x1015) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:11.935 10:12:56 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:11.935 Found net devices under 0000:da:00.0: mlx_0_0 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:11.935 Found net devices under 0000:da:00.1: mlx_0_1 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@420 -- # rdma_device_init 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # uname 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:11.935 10:12:56 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:11.935 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:11.935 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:24:11.935 altname enp218s0f0np0 00:24:11.935 altname ens818f0np0 00:24:11.935 inet 192.168.100.8/24 scope global mlx_0_0 00:24:11.935 valid_lft forever preferred_lft forever 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:11.935 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:11.935 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:24:11.935 altname enp218s0f1np1 00:24:11.935 altname ens818f1np1 00:24:11.935 inet 192.168.100.9/24 scope global mlx_0_1 00:24:11.935 valid_lft forever preferred_lft forever 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@96 -- # (( 2 == 0 )) 
00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.935 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # continue 2 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:11.936 192.168.100.9' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:11.936 192.168.100.9' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # head -n 1 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:11.936 192.168.100.9' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # tail -n 
+2 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # head -n 1 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:11.936 ************************************ 00:24:11.936 START TEST nvmf_target_disconnect_tc1 00:24:11.936 ************************************ 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:11.936 10:12:56 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect ]] 00:24:11.936 10:12:56 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:11.936 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.936 [2024-07-25 10:12:56.580498] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:11.936 [2024-07-25 10:12:56.580582] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:11.936 [2024-07-25 10:12:56.580605] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d7040 00:24:12.504 [2024-07-25 10:12:57.584669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:12.504 [2024-07-25 10:12:57.584724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:24:12.504 [2024-07-25 10:12:57.584749] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:24:12.504 [2024-07-25 10:12:57.584800] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:12.504 [2024-07-25 10:12:57.584821] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:12.504 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:24:12.504 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:12.504 Initializing NVMe Controllers 00:24:12.504 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:24:12.504 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:12.504 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:12.504 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:12.504 00:24:12.504 real 0m1.128s 00:24:12.504 user 0m0.957s 00:24:12.504 sys 0m0.160s 00:24:12.504 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:12.504 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:12.504 ************************************ 00:24:12.504 END TEST nvmf_target_disconnect_tc1 00:24:12.504 ************************************ 00:24:12.504 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:12.504 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:12.504 10:12:57 
nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:12.504 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:12.764 ************************************ 00:24:12.764 START TEST nvmf_target_disconnect_tc2 00:24:12.764 ************************************ 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 192.168.100.8 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2669016 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2669016 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2669016 ']' 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.764 10:12:57 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:12.764 [2024-07-25 10:12:57.717657] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:12.764 [2024-07-25 10:12:57.717699] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.764 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.764 [2024-07-25 10:12:57.782646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:12.764 [2024-07-25 10:12:57.861320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:24:12.764 [2024-07-25 10:12:57.861355] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.764 [2024-07-25 10:12:57.861362] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.764 [2024-07-25 10:12:57.861368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.764 [2024-07-25 10:12:57.861373] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:12.764 [2024-07-25 10:12:57.861489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:12.764 [2024-07-25 10:12:57.861597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:12.764 [2024-07-25 10:12:57.861703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:12.764 [2024-07-25 10:12:57.861704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.701 Malloc0 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.701 [2024-07-25 10:12:58.612965] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xae5cf0/0xaf18c0) succeed. 00:24:13.701 [2024-07-25 10:12:58.622291] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xae7330/0xb32f50) succeed. 
00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.701 [2024-07-25 10:12:58.761829] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2669153 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:24:13.701 10:12:58 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:24:13.701 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.641 10:13:00 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2669016 00:24:15.641 10:13:00 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:24:17.018 Write completed with error (sct=0, sc=8) 00:24:17.018 starting I/O failed 00:24:17.018 Read completed with error (sct=0, sc=8) 00:24:17.018 starting I/O failed 00:24:17.018 Write completed with error (sct=0, sc=8) 00:24:17.018 starting I/O failed 00:24:17.018 Read completed with error (sct=0, sc=8) 00:24:17.018 starting I/O failed 00:24:17.018 Read completed with error (sct=0, sc=8) 00:24:17.018 starting I/O failed 00:24:17.018 Read completed with error (sct=0, sc=8) 00:24:17.018 starting I/O failed 00:24:17.018 Write completed with error (sct=0, sc=8) 00:24:17.018 starting I/O failed 00:24:17.018 Write completed with error (sct=0, sc=8) 00:24:17.018 starting I/O failed 00:24:17.018 Write completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Write completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Write completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Write completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Write completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Write completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Write completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Write completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 Read completed with error (sct=0, sc=8) 00:24:17.019 starting I/O failed 00:24:17.019 [2024-07-25 10:13:01.952314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:17.956 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2669016 Killed "${NVMF_APP[@]}" "$@" 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 192.168.100.8 00:24:17.956 10:13:02 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2669847 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2669847 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2669847 ']' 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.956 10:13:02 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:17.956 [2024-07-25 10:13:02.835615] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:24:17.956 [2024-07-25 10:13:02.835664] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.956 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.956 [2024-07-25 10:13:02.904167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Write completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 Read completed with error (sct=0, sc=8) 00:24:17.956 starting I/O failed 00:24:17.956 [2024-07-25 10:13:02.957290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:17.956 [2024-07-25 10:13:02.958943] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from 
CM event channel (status = 8) 00:24:17.956 [2024-07-25 10:13:02.958962] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:17.956 [2024-07-25 10:13:02.958968] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:17.956 [2024-07-25 10:13:02.977399] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.956 [2024-07-25 10:13:02.977429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.956 [2024-07-25 10:13:02.977436] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.956 [2024-07-25 10:13:02.977442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:17.956 [2024-07-25 10:13:02.977447] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.956 [2024-07-25 10:13:02.977556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:17.956 [2024-07-25 10:13:02.977669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:17.956 [2024-07-25 10:13:02.977773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:17.956 [2024-07-25 10:13:02.977774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:18.524 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:18.524 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:18.524 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:18.524 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:18.524 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.524 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:18.524 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:18.524 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.524 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.782 Malloc0 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.782 [2024-07-25 10:13:03.722343] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x568cf0/0x5748c0) succeed. 
00:24:18.782 [2024-07-25 10:13:03.731860] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x56a330/0x5b5f50) succeed. 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.782 [2024-07-25 10:13:03.872279] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.782 10:13:03 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2669153 00:24:19.041 [2024-07-25 10:13:03.962851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:19.041 qpair failed and we were unable to recover it. 
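Stripped of the xtrace prefixes, the rpc_cmd calls above are the complete target bring-up for this test case: a 64 MB malloc bdev, an RDMA transport, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and two listeners. Written against SPDK's stock scripts/rpc.py (the harness's rpc_cmd wrapper issues the equivalent RPCs; every parameter below is taken verbatim from the log lines above), the sequence is roughly:

# Equivalent bring-up via rpc.py against the default RPC socket.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420

The "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice above is the confirmation that the listener calls took effect.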
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Write completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 Read completed with error (sct=0, sc=8)
00:24:19.978 starting I/O failed
00:24:19.978 [2024-07-25 10:13:04.967874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:19.978 [2024-07-25 10:13:04.979502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:19.978 [2024-07-25 10:13:04.979556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:19.978 [2024-07-25 10:13:04.979575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:19.978 [2024-07-25 10:13:04.979583] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:19.978 [2024-07-25 10:13:04.979589] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:19.978 [2024-07-25 10:13:04.989644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:19.978 qpair failed and we were unable to recover it.
00:24:19.978 [2024-07-25 10:13:04.999471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:19.978 [2024-07-25 10:13:04.999510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:19.978 [2024-07-25 10:13:04.999525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:19.978 [2024-07-25 10:13:04.999532] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:19.978 [2024-07-25 10:13:04.999539] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:19.978 [2024-07-25 10:13:05.009892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:19.978 qpair failed and we were unable to recover it.
00:24:19.978 [2024-07-25 10:13:05.019605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:19.978 [2024-07-25 10:13:05.019647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:19.978 [2024-07-25 10:13:05.019662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:19.978 [2024-07-25 10:13:05.019673] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:19.978 [2024-07-25 10:13:05.019679] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:19.978 [2024-07-25 10:13:05.029949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:19.978 qpair failed and we were unable to recover it.
00:24:19.978 [2024-07-25 10:13:05.039448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:19.978 [2024-07-25 10:13:05.039487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:19.978 [2024-07-25 10:13:05.039502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:19.978 [2024-07-25 10:13:05.039509] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:19.978 [2024-07-25 10:13:05.039515] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:19.978 [2024-07-25 10:13:05.049944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:19.978 qpair failed and we were unable to recover it.
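From this point to the end of the capture, the same seven-entry pattern repeats with fresh timestamps: the host re-issues the fabrics CONNECT for an I/O qpair, the target rejects it with "Unknown controller ID 0x1" (the controller that CONNECT references went away with the earlier disconnect), the host-side connect poll reports sct 1, sc 130, and the qpair is torn down with CQ transport error -6, i.e. ENXIO. The status fields are printed in decimal; converting them shows the class at a glance. sct 0x1 is the NVMe spec's command-specific status type, and the sct=0, sc=8 on the I/O completions above is generic status 0x08, command aborted due to SQ deletion:

# The log prints NVMe status fields in decimal; view them as the spec's hex codes.
printf 'sct=0x%x sc=0x%x\n' 1 130   # connect failures -> sct=0x1 sc=0x82
printf 'sct=0x%x sc=0x%x\n' 0 8     # aborted I/O      -> sct=0x0 sc=0x8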
00:24:19.978 [2024-07-25 10:13:05.059536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:19.978 [2024-07-25 10:13:05.059583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:19.978 [2024-07-25 10:13:05.059598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:19.978 [2024-07-25 10:13:05.059605] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:19.978 [2024-07-25 10:13:05.059610] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:19.978 [2024-07-25 10:13:05.069895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.978 qpair failed and we were unable to recover it. 00:24:19.978 [2024-07-25 10:13:05.079632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:19.978 [2024-07-25 10:13:05.079669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:19.978 [2024-07-25 10:13:05.079684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:19.978 [2024-07-25 10:13:05.079691] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:19.978 [2024-07-25 10:13:05.079697] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:19.978 [2024-07-25 10:13:05.089997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.978 qpair failed and we were unable to recover it. 00:24:19.978 [2024-07-25 10:13:05.099565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:19.978 [2024-07-25 10:13:05.099603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:19.978 [2024-07-25 10:13:05.099617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:19.978 [2024-07-25 10:13:05.099625] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:19.978 [2024-07-25 10:13:05.099631] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:19.978 [2024-07-25 10:13:05.110057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.978 qpair failed and we were unable to recover it. 
00:24:19.978 [2024-07-25 10:13:05.119688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:19.978 [2024-07-25 10:13:05.119724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:19.978 [2024-07-25 10:13:05.119739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:19.978 [2024-07-25 10:13:05.119746] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:19.978 [2024-07-25 10:13:05.119752] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:19.978 [2024-07-25 10:13:05.130085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:19.978 qpair failed and we were unable to recover it. 00:24:20.238 [2024-07-25 10:13:05.139755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.238 [2024-07-25 10:13:05.139796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.238 [2024-07-25 10:13:05.139810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.238 [2024-07-25 10:13:05.139817] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.238 [2024-07-25 10:13:05.139823] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.238 [2024-07-25 10:13:05.150152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.238 qpair failed and we were unable to recover it. 00:24:20.238 [2024-07-25 10:13:05.159814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.238 [2024-07-25 10:13:05.159852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.238 [2024-07-25 10:13:05.159866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.238 [2024-07-25 10:13:05.159873] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.238 [2024-07-25 10:13:05.159879] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.238 [2024-07-25 10:13:05.170211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.238 qpair failed and we were unable to recover it. 
00:24:20.238 [2024-07-25 10:13:05.179832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.238 [2024-07-25 10:13:05.179869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.238 [2024-07-25 10:13:05.179884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.238 [2024-07-25 10:13:05.179891] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.238 [2024-07-25 10:13:05.179897] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.238 [2024-07-25 10:13:05.190148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.238 qpair failed and we were unable to recover it. 00:24:20.238 [2024-07-25 10:13:05.199859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.238 [2024-07-25 10:13:05.199897] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.238 [2024-07-25 10:13:05.199914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.238 [2024-07-25 10:13:05.199921] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.238 [2024-07-25 10:13:05.199927] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.238 [2024-07-25 10:13:05.210170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.238 qpair failed and we were unable to recover it. 00:24:20.238 [2024-07-25 10:13:05.219960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.238 [2024-07-25 10:13:05.219996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.238 [2024-07-25 10:13:05.220010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.238 [2024-07-25 10:13:05.220017] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.238 [2024-07-25 10:13:05.220023] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.238 [2024-07-25 10:13:05.230439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.238 qpair failed and we were unable to recover it. 
00:24:20.238 [2024-07-25 10:13:05.240000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.238 [2024-07-25 10:13:05.240039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.238 [2024-07-25 10:13:05.240053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.238 [2024-07-25 10:13:05.240060] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.238 [2024-07-25 10:13:05.240066] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.238 [2024-07-25 10:13:05.250496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.238 qpair failed and we were unable to recover it. 00:24:20.238 [2024-07-25 10:13:05.259978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.238 [2024-07-25 10:13:05.260017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.238 [2024-07-25 10:13:05.260031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.238 [2024-07-25 10:13:05.260038] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.238 [2024-07-25 10:13:05.260044] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.238 [2024-07-25 10:13:05.270577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.238 qpair failed and we were unable to recover it. 00:24:20.238 [2024-07-25 10:13:05.280110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.238 [2024-07-25 10:13:05.280152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.238 [2024-07-25 10:13:05.280167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.238 [2024-07-25 10:13:05.280174] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.238 [2024-07-25 10:13:05.280183] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.239 [2024-07-25 10:13:05.290540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.239 qpair failed and we were unable to recover it. 
00:24:20.239 [2024-07-25 10:13:05.300063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.239 [2024-07-25 10:13:05.300099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.239 [2024-07-25 10:13:05.300113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.239 [2024-07-25 10:13:05.300120] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.239 [2024-07-25 10:13:05.300132] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.239 [2024-07-25 10:13:05.310586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-25 10:13:05.320245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.239 [2024-07-25 10:13:05.320283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.239 [2024-07-25 10:13:05.320298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.239 [2024-07-25 10:13:05.320305] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.239 [2024-07-25 10:13:05.320311] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.239 [2024-07-25 10:13:05.330672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-25 10:13:05.340330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.239 [2024-07-25 10:13:05.340372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.239 [2024-07-25 10:13:05.340387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.239 [2024-07-25 10:13:05.340394] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.239 [2024-07-25 10:13:05.340400] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.239 [2024-07-25 10:13:05.350709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.239 qpair failed and we were unable to recover it. 
00:24:20.239 [2024-07-25 10:13:05.360376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.239 [2024-07-25 10:13:05.360414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.239 [2024-07-25 10:13:05.360429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.239 [2024-07-25 10:13:05.360436] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.239 [2024-07-25 10:13:05.360442] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.239 [2024-07-25 10:13:05.370861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.239 [2024-07-25 10:13:05.380461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.239 [2024-07-25 10:13:05.380501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.239 [2024-07-25 10:13:05.380516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.239 [2024-07-25 10:13:05.380523] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.239 [2024-07-25 10:13:05.380529] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.239 [2024-07-25 10:13:05.390983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.239 qpair failed and we were unable to recover it. 00:24:20.498 [2024-07-25 10:13:05.400475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.498 [2024-07-25 10:13:05.400516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.498 [2024-07-25 10:13:05.400531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.498 [2024-07-25 10:13:05.400538] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.498 [2024-07-25 10:13:05.400544] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.498 [2024-07-25 10:13:05.410980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.498 qpair failed and we were unable to recover it. 
00:24:20.498 [2024-07-25 10:13:05.420606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.498 [2024-07-25 10:13:05.420643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.498 [2024-07-25 10:13:05.420658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.498 [2024-07-25 10:13:05.420665] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.498 [2024-07-25 10:13:05.420671] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.498 [2024-07-25 10:13:05.431016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.498 qpair failed and we were unable to recover it. 00:24:20.498 [2024-07-25 10:13:05.440732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.440768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.440783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.440790] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.440796] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.451071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 00:24:20.499 [2024-07-25 10:13:05.460736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.460776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.460790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.460803] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.460809] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.471104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 
00:24:20.499 [2024-07-25 10:13:05.480755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.480794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.480808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.480815] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.480821] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.491097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 00:24:20.499 [2024-07-25 10:13:05.500661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.500695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.500710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.500717] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.500722] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.511221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 00:24:20.499 [2024-07-25 10:13:05.520839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.520876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.520891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.520897] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.520903] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.531367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 
00:24:20.499 [2024-07-25 10:13:05.540869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.540911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.540925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.540932] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.540938] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.551368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 00:24:20.499 [2024-07-25 10:13:05.560960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.560997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.561012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.561019] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.561025] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.571343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 00:24:20.499 [2024-07-25 10:13:05.581091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.581135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.581149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.581156] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.581162] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.591415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 
00:24:20.499 [2024-07-25 10:13:05.601117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.601158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.601174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.601181] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.601187] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.611566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 00:24:20.499 [2024-07-25 10:13:05.621192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.621231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.621245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.621253] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.621259] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.631558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 00:24:20.499 [2024-07-25 10:13:05.641232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.499 [2024-07-25 10:13:05.641271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.499 [2024-07-25 10:13:05.641289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.499 [2024-07-25 10:13:05.641296] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.499 [2024-07-25 10:13:05.641302] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.499 [2024-07-25 10:13:05.651683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.499 qpair failed and we were unable to recover it. 
00:24:20.759 [2024-07-25 10:13:05.661207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.759 [2024-07-25 10:13:05.661246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.759 [2024-07-25 10:13:05.661260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.759 [2024-07-25 10:13:05.661267] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.759 [2024-07-25 10:13:05.661273] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.759 [2024-07-25 10:13:05.671813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.759 qpair failed and we were unable to recover it. 00:24:20.759 [2024-07-25 10:13:05.681433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.759 [2024-07-25 10:13:05.681473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.759 [2024-07-25 10:13:05.681487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.759 [2024-07-25 10:13:05.681495] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.759 [2024-07-25 10:13:05.681500] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.759 [2024-07-25 10:13:05.691861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.759 qpair failed and we were unable to recover it. 00:24:20.759 [2024-07-25 10:13:05.701397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.759 [2024-07-25 10:13:05.701437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.759 [2024-07-25 10:13:05.701451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.759 [2024-07-25 10:13:05.701458] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.759 [2024-07-25 10:13:05.701464] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.759 [2024-07-25 10:13:05.711818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.759 qpair failed and we were unable to recover it. 
00:24:20.759 [2024-07-25 10:13:05.721469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.759 [2024-07-25 10:13:05.721511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.759 [2024-07-25 10:13:05.721528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.759 [2024-07-25 10:13:05.721535] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.759 [2024-07-25 10:13:05.721544] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.759 [2024-07-25 10:13:05.731752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.759 qpair failed and we were unable to recover it. 00:24:20.759 [2024-07-25 10:13:05.741528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.759 [2024-07-25 10:13:05.741568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.759 [2024-07-25 10:13:05.741583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.759 [2024-07-25 10:13:05.741590] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.759 [2024-07-25 10:13:05.741596] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.759 [2024-07-25 10:13:05.751896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.759 qpair failed and we were unable to recover it. 00:24:20.759 [2024-07-25 10:13:05.761474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.759 [2024-07-25 10:13:05.761514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.759 [2024-07-25 10:13:05.761529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.759 [2024-07-25 10:13:05.761536] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.759 [2024-07-25 10:13:05.761542] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.759 [2024-07-25 10:13:05.771839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.759 qpair failed and we were unable to recover it. 
00:24:20.760 [2024-07-25 10:13:05.781685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.760 [2024-07-25 10:13:05.781726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.760 [2024-07-25 10:13:05.781740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.760 [2024-07-25 10:13:05.781747] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.760 [2024-07-25 10:13:05.781753] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.760 [2024-07-25 10:13:05.791940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.760 qpair failed and we were unable to recover it. 00:24:20.760 [2024-07-25 10:13:05.801672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.760 [2024-07-25 10:13:05.801716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.760 [2024-07-25 10:13:05.801730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.760 [2024-07-25 10:13:05.801737] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.760 [2024-07-25 10:13:05.801743] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.760 [2024-07-25 10:13:05.812008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.760 qpair failed and we were unable to recover it. 00:24:20.760 [2024-07-25 10:13:05.821683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.760 [2024-07-25 10:13:05.821714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.760 [2024-07-25 10:13:05.821728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.760 [2024-07-25 10:13:05.821735] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.760 [2024-07-25 10:13:05.821741] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.760 [2024-07-25 10:13:05.832070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.760 qpair failed and we were unable to recover it. 
00:24:20.760 [2024-07-25 10:13:05.841762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.760 [2024-07-25 10:13:05.841800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.760 [2024-07-25 10:13:05.841815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.760 [2024-07-25 10:13:05.841822] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.760 [2024-07-25 10:13:05.841828] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.760 [2024-07-25 10:13:05.852144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.760 qpair failed and we were unable to recover it. 00:24:20.760 [2024-07-25 10:13:05.861820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.760 [2024-07-25 10:13:05.861867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.760 [2024-07-25 10:13:05.861881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.760 [2024-07-25 10:13:05.861888] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.760 [2024-07-25 10:13:05.861894] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.760 [2024-07-25 10:13:05.872229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.760 qpair failed and we were unable to recover it. 00:24:20.760 [2024-07-25 10:13:05.881854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.760 [2024-07-25 10:13:05.881892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.760 [2024-07-25 10:13:05.881907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.760 [2024-07-25 10:13:05.881914] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.760 [2024-07-25 10:13:05.881921] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.760 [2024-07-25 10:13:05.892355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.760 qpair failed and we were unable to recover it. 
00:24:20.760 [2024-07-25 10:13:05.902002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:20.760 [2024-07-25 10:13:05.902040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:20.760 [2024-07-25 10:13:05.902055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:20.760 [2024-07-25 10:13:05.902065] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:20.760 [2024-07-25 10:13:05.902071] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:20.760 [2024-07-25 10:13:05.912350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.760 qpair failed and we were unable to recover it. 00:24:21.020 [2024-07-25 10:13:05.921999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.020 [2024-07-25 10:13:05.922033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.020 [2024-07-25 10:13:05.922048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.020 [2024-07-25 10:13:05.922055] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.020 [2024-07-25 10:13:05.922061] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.020 [2024-07-25 10:13:05.932453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.020 qpair failed and we were unable to recover it. 00:24:21.020 [2024-07-25 10:13:05.942168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.020 [2024-07-25 10:13:05.942210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.020 [2024-07-25 10:13:05.942224] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.020 [2024-07-25 10:13:05.942231] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.020 [2024-07-25 10:13:05.942237] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.020 [2024-07-25 10:13:05.952412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.020 qpair failed and we were unable to recover it. 
00:24:21.020 [2024-07-25 10:13:05.962109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.020 [2024-07-25 10:13:05.962155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.020 [2024-07-25 10:13:05.962170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.020 [2024-07-25 10:13:05.962177] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.020 [2024-07-25 10:13:05.962183] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.020 [2024-07-25 10:13:05.972512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.020 qpair failed and we were unable to recover it. 00:24:21.020 [2024-07-25 10:13:05.982343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.020 [2024-07-25 10:13:05.982377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.020 [2024-07-25 10:13:05.982391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.020 [2024-07-25 10:13:05.982398] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.020 [2024-07-25 10:13:05.982404] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.020 [2024-07-25 10:13:05.992558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.020 qpair failed and we were unable to recover it. 00:24:21.020 [2024-07-25 10:13:06.002265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:21.020 [2024-07-25 10:13:06.002303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:21.020 [2024-07-25 10:13:06.002317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:21.020 [2024-07-25 10:13:06.002324] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:21.020 [2024-07-25 10:13:06.002331] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:21.020 [2024-07-25 10:13:06.012644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:21.020 qpair failed and we were unable to recover it. 
00:24:21.020 [2024-07-25 10:13:06.022399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.020 [2024-07-25 10:13:06.022436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.020 [2024-07-25 10:13:06.022452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.020 [2024-07-25 10:13:06.022460] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.020 [2024-07-25 10:13:06.022466] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.020 [2024-07-25 10:13:06.032559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.020 qpair failed and we were unable to recover it.
00:24:21.020 [2024-07-25 10:13:06.042357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.020 [2024-07-25 10:13:06.042394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.020 [2024-07-25 10:13:06.042409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.020 [2024-07-25 10:13:06.042416] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.020 [2024-07-25 10:13:06.042422] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.020 [2024-07-25 10:13:06.052692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.020 qpair failed and we were unable to recover it.
00:24:21.020 [2024-07-25 10:13:06.062380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.020 [2024-07-25 10:13:06.062416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.020 [2024-07-25 10:13:06.062430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.020 [2024-07-25 10:13:06.062437] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.020 [2024-07-25 10:13:06.062443] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.020 [2024-07-25 10:13:06.072686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.020 qpair failed and we were unable to recover it.
00:24:21.020 [2024-07-25 10:13:06.082443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.020 [2024-07-25 10:13:06.082480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.020 [2024-07-25 10:13:06.082498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.020 [2024-07-25 10:13:06.082505] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.020 [2024-07-25 10:13:06.082511] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.020 [2024-07-25 10:13:06.092721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.020 qpair failed and we were unable to recover it.
00:24:21.020 [2024-07-25 10:13:06.102490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.020 [2024-07-25 10:13:06.102534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.020 [2024-07-25 10:13:06.102549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.020 [2024-07-25 10:13:06.102556] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.020 [2024-07-25 10:13:06.102562] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.020 [2024-07-25 10:13:06.112982] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.020 qpair failed and we were unable to recover it.
00:24:21.020 [2024-07-25 10:13:06.122525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.020 [2024-07-25 10:13:06.122560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.020 [2024-07-25 10:13:06.122574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.020 [2024-07-25 10:13:06.122581] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.020 [2024-07-25 10:13:06.122588] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.020 [2024-07-25 10:13:06.133067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.020 qpair failed and we were unable to recover it.
00:24:21.020 [2024-07-25 10:13:06.142684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.020 [2024-07-25 10:13:06.142719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.020 [2024-07-25 10:13:06.142733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.020 [2024-07-25 10:13:06.142741] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.020 [2024-07-25 10:13:06.142747] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.020 [2024-07-25 10:13:06.153009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.020 qpair failed and we were unable to recover it.
00:24:21.020 [2024-07-25 10:13:06.162629] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.020 [2024-07-25 10:13:06.162666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.020 [2024-07-25 10:13:06.162681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.020 [2024-07-25 10:13:06.162688] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.021 [2024-07-25 10:13:06.162697] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.021 [2024-07-25 10:13:06.173004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.021 qpair failed and we were unable to recover it.
00:24:21.280 [2024-07-25 10:13:06.182876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.280 [2024-07-25 10:13:06.182921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.280 [2024-07-25 10:13:06.182936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.280 [2024-07-25 10:13:06.182943] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.280 [2024-07-25 10:13:06.182948] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.280 [2024-07-25 10:13:06.193149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.280 qpair failed and we were unable to recover it.
00:24:21.280 [2024-07-25 10:13:06.202798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.280 [2024-07-25 10:13:06.202837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.280 [2024-07-25 10:13:06.202851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.280 [2024-07-25 10:13:06.202858] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.280 [2024-07-25 10:13:06.202864] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.280 [2024-07-25 10:13:06.213352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.280 qpair failed and we were unable to recover it.
00:24:21.280 [2024-07-25 10:13:06.222974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.280 [2024-07-25 10:13:06.223010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.280 [2024-07-25 10:13:06.223032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.280 [2024-07-25 10:13:06.223039] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.280 [2024-07-25 10:13:06.223045] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.280 [2024-07-25 10:13:06.233412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.280 qpair failed and we were unable to recover it.
00:24:21.280 [2024-07-25 10:13:06.242920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.280 [2024-07-25 10:13:06.242959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.280 [2024-07-25 10:13:06.242973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.280 [2024-07-25 10:13:06.242980] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.280 [2024-07-25 10:13:06.242986] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.280 [2024-07-25 10:13:06.253236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.280 qpair failed and we were unable to recover it.
00:24:21.280 [2024-07-25 10:13:06.263063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.280 [2024-07-25 10:13:06.263098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.280 [2024-07-25 10:13:06.263113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.280 [2024-07-25 10:13:06.263119] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.280 [2024-07-25 10:13:06.263126] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.280 [2024-07-25 10:13:06.273479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.280 qpair failed and we were unable to recover it.
00:24:21.280 [2024-07-25 10:13:06.283193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.280 [2024-07-25 10:13:06.283226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.280 [2024-07-25 10:13:06.283241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.280 [2024-07-25 10:13:06.283248] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.280 [2024-07-25 10:13:06.283254] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.280 [2024-07-25 10:13:06.293290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.280 qpair failed and we were unable to recover it.
00:24:21.280 [2024-07-25 10:13:06.303199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.280 [2024-07-25 10:13:06.303237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.280 [2024-07-25 10:13:06.303251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.280 [2024-07-25 10:13:06.303257] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.280 [2024-07-25 10:13:06.303264] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.281 [2024-07-25 10:13:06.313562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.281 qpair failed and we were unable to recover it.
00:24:21.281 [2024-07-25 10:13:06.323327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.281 [2024-07-25 10:13:06.323365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.281 [2024-07-25 10:13:06.323379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.281 [2024-07-25 10:13:06.323386] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.281 [2024-07-25 10:13:06.323392] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.281 [2024-07-25 10:13:06.333496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.281 qpair failed and we were unable to recover it.
00:24:21.281 [2024-07-25 10:13:06.343411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.281 [2024-07-25 10:13:06.343453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.281 [2024-07-25 10:13:06.343468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.281 [2024-07-25 10:13:06.343477] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.281 [2024-07-25 10:13:06.343483] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.281 [2024-07-25 10:13:06.353628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.281 qpair failed and we were unable to recover it.
00:24:21.281 [2024-07-25 10:13:06.363368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.281 [2024-07-25 10:13:06.363407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.281 [2024-07-25 10:13:06.363421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.281 [2024-07-25 10:13:06.363429] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.281 [2024-07-25 10:13:06.363435] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.281 [2024-07-25 10:13:06.373637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.281 qpair failed and we were unable to recover it.
00:24:21.281 [2024-07-25 10:13:06.383526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.281 [2024-07-25 10:13:06.383558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.281 [2024-07-25 10:13:06.383572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.281 [2024-07-25 10:13:06.383579] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.281 [2024-07-25 10:13:06.383585] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.281 [2024-07-25 10:13:06.393866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.281 qpair failed and we were unable to recover it.
00:24:21.281 [2024-07-25 10:13:06.403551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.281 [2024-07-25 10:13:06.403588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.281 [2024-07-25 10:13:06.403603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.281 [2024-07-25 10:13:06.403610] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.281 [2024-07-25 10:13:06.403616] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.281 [2024-07-25 10:13:06.413901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.281 qpair failed and we were unable to recover it.
00:24:21.281 [2024-07-25 10:13:06.423658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.281 [2024-07-25 10:13:06.423700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.281 [2024-07-25 10:13:06.423714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.281 [2024-07-25 10:13:06.423722] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.281 [2024-07-25 10:13:06.423728] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.281 [2024-07-25 10:13:06.434064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.281 qpair failed and we were unable to recover it.
00:24:21.540 [2024-07-25 10:13:06.443666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.443703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.443718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.443724] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.443731] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.454072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.463698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.463732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.463746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.463753] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.463759] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.474046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.483683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.483720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.483734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.483741] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.483747] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.493979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.503850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.503893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.503907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.503914] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.503920] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.514199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.523831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.523868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.523886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.523893] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.523899] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.534232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.543868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.543906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.543921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.543928] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.543934] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.554230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.563964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.564002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.564016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.564023] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.564029] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.574425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.584028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.584068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.584083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.584090] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.584096] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.594350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.604124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.604172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.604188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.604196] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.604206] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.614353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.624153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.624187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.624202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.624209] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.624215] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.634583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.644257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.644296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.644311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.644318] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.644324] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.654471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.664236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.664276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.664291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.541 [2024-07-25 10:13:06.664298] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.541 [2024-07-25 10:13:06.664304] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.541 [2024-07-25 10:13:06.674486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.541 qpair failed and we were unable to recover it.
00:24:21.541 [2024-07-25 10:13:06.684443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.541 [2024-07-25 10:13:06.684483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.541 [2024-07-25 10:13:06.684497] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.542 [2024-07-25 10:13:06.684504] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.542 [2024-07-25 10:13:06.684510] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.542 [2024-07-25 10:13:06.694773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.542 qpair failed and we were unable to recover it.
00:24:21.801 [2024-07-25 10:13:06.704739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.801 [2024-07-25 10:13:06.704777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.801 [2024-07-25 10:13:06.704792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.801 [2024-07-25 10:13:06.704799] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.801 [2024-07-25 10:13:06.704805] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.801 [2024-07-25 10:13:06.714810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.801 qpair failed and we were unable to recover it.
00:24:21.801 [2024-07-25 10:13:06.724587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.801 [2024-07-25 10:13:06.724625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.801 [2024-07-25 10:13:06.724639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.801 [2024-07-25 10:13:06.724645] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.801 [2024-07-25 10:13:06.724651] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.801 [2024-07-25 10:13:06.734800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.801 qpair failed and we were unable to recover it.
00:24:21.801 [2024-07-25 10:13:06.744470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.801 [2024-07-25 10:13:06.744510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.801 [2024-07-25 10:13:06.744524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.801 [2024-07-25 10:13:06.744531] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.801 [2024-07-25 10:13:06.744537] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.801 [2024-07-25 10:13:06.754977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.801 qpair failed and we were unable to recover it.
00:24:21.801 [2024-07-25 10:13:06.764558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.801 [2024-07-25 10:13:06.764599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.801 [2024-07-25 10:13:06.764613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.801 [2024-07-25 10:13:06.764620] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.801 [2024-07-25 10:13:06.764626] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.801 [2024-07-25 10:13:06.774928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.801 qpair failed and we were unable to recover it.
00:24:21.801 [2024-07-25 10:13:06.784680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.801 [2024-07-25 10:13:06.784720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.801 [2024-07-25 10:13:06.784736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.801 [2024-07-25 10:13:06.784749] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.801 [2024-07-25 10:13:06.784757] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.802 [2024-07-25 10:13:06.795113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.802 qpair failed and we were unable to recover it.
00:24:21.802 [2024-07-25 10:13:06.804735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.802 [2024-07-25 10:13:06.804774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.802 [2024-07-25 10:13:06.804788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.802 [2024-07-25 10:13:06.804796] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.802 [2024-07-25 10:13:06.804802] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.802 [2024-07-25 10:13:06.814969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.802 qpair failed and we were unable to recover it.
00:24:21.802 [2024-07-25 10:13:06.824707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.802 [2024-07-25 10:13:06.824754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.802 [2024-07-25 10:13:06.824768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.802 [2024-07-25 10:13:06.824775] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.802 [2024-07-25 10:13:06.824781] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.802 [2024-07-25 10:13:06.835222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.802 qpair failed and we were unable to recover it.
00:24:21.802 [2024-07-25 10:13:06.844790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.802 [2024-07-25 10:13:06.844827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.802 [2024-07-25 10:13:06.844841] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.802 [2024-07-25 10:13:06.844848] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.802 [2024-07-25 10:13:06.844855] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.802 [2024-07-25 10:13:06.855141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.802 qpair failed and we were unable to recover it.
00:24:21.802 [2024-07-25 10:13:06.864901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.802 [2024-07-25 10:13:06.864934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.802 [2024-07-25 10:13:06.864948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.802 [2024-07-25 10:13:06.864955] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.802 [2024-07-25 10:13:06.864961] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.802 [2024-07-25 10:13:06.875187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.802 qpair failed and we were unable to recover it.
00:24:21.802 [2024-07-25 10:13:06.884881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.802 [2024-07-25 10:13:06.884918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.802 [2024-07-25 10:13:06.884933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.802 [2024-07-25 10:13:06.884940] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.802 [2024-07-25 10:13:06.884946] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.802 [2024-07-25 10:13:06.895367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.802 qpair failed and we were unable to recover it.
00:24:21.802 [2024-07-25 10:13:06.904980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.802 [2024-07-25 10:13:06.905019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.802 [2024-07-25 10:13:06.905033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.802 [2024-07-25 10:13:06.905040] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.802 [2024-07-25 10:13:06.905046] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.802 [2024-07-25 10:13:06.915506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.802 qpair failed and we were unable to recover it.
00:24:21.802 [2024-07-25 10:13:06.924994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.802 [2024-07-25 10:13:06.925033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.802 [2024-07-25 10:13:06.925048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.802 [2024-07-25 10:13:06.925055] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.802 [2024-07-25 10:13:06.925061] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.802 [2024-07-25 10:13:06.935375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.802 qpair failed and we were unable to recover it.
00:24:21.802 [2024-07-25 10:13:06.945002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:21.802 [2024-07-25 10:13:06.945040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:21.802 [2024-07-25 10:13:06.945054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:21.802 [2024-07-25 10:13:06.945061] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:21.802 [2024-07-25 10:13:06.945067] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:21.802 [2024-07-25 10:13:06.955716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:21.802 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:06.965161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:06.965204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:06.965221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:06.965228] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.062 [2024-07-25 10:13:06.965234] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.062 [2024-07-25 10:13:06.975676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.062 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:06.985176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:06.985218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:06.985232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:06.985239] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.062 [2024-07-25 10:13:06.985245] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.062 [2024-07-25 10:13:06.995594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.062 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:07.005188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:07.005227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:07.005241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:07.005248] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.062 [2024-07-25 10:13:07.005254] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.062 [2024-07-25 10:13:07.015724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.062 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:07.025273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:07.025313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:07.025327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:07.025334] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.062 [2024-07-25 10:13:07.025340] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.062 [2024-07-25 10:13:07.035732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.062 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:07.045320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:07.045359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:07.045373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:07.045380] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.062 [2024-07-25 10:13:07.045389] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.062 [2024-07-25 10:13:07.055741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.062 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:07.065471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:07.065513] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:07.065527] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:07.065534] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.062 [2024-07-25 10:13:07.065540] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.062 [2024-07-25 10:13:07.075770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.062 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:07.085558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:07.085600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:07.085616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:07.085624] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.062 [2024-07-25 10:13:07.085630] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.062 [2024-07-25 10:13:07.096065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.062 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:07.105601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:07.105641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:07.105656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:07.105662] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.062 [2024-07-25 10:13:07.105668] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.062 [2024-07-25 10:13:07.115851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.062 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:07.125643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:07.125681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:07.125695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:07.125702] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.062 [2024-07-25 10:13:07.125708] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.062 [2024-07-25 10:13:07.136108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.062 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:07.145701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:07.145739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:07.145754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:07.145761] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.062 [2024-07-25 10:13:07.145767] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.062 [2024-07-25 10:13:07.156180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.062 qpair failed and we were unable to recover it.
00:24:22.062 [2024-07-25 10:13:07.165685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.062 [2024-07-25 10:13:07.165726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.062 [2024-07-25 10:13:07.165740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.062 [2024-07-25 10:13:07.165747] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.063 [2024-07-25 10:13:07.165754] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.063 [2024-07-25 10:13:07.176150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.063 qpair failed and we were unable to recover it.
00:24:22.063 [2024-07-25 10:13:07.185858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.063 [2024-07-25 10:13:07.185893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.063 [2024-07-25 10:13:07.185907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.063 [2024-07-25 10:13:07.185914] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.063 [2024-07-25 10:13:07.185920] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.063 [2024-07-25 10:13:07.196306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.063 qpair failed and we were unable to recover it.
00:24:22.063 [2024-07-25 10:13:07.205845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.063 [2024-07-25 10:13:07.205885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.063 [2024-07-25 10:13:07.205899] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.063 [2024-07-25 10:13:07.205906] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.063 [2024-07-25 10:13:07.205912] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.063 [2024-07-25 10:13:07.216390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.063 qpair failed and we were unable to recover it.
00:24:22.322 [2024-07-25 10:13:07.225887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.322 [2024-07-25 10:13:07.225929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.322 [2024-07-25 10:13:07.225945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.322 [2024-07-25 10:13:07.225955] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.322 [2024-07-25 10:13:07.225961] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.322 [2024-07-25 10:13:07.236488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.322 qpair failed and we were unable to recover it.
00:24:22.322 [2024-07-25 10:13:07.245984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.322 [2024-07-25 10:13:07.246018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.322 [2024-07-25 10:13:07.246032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.322 [2024-07-25 10:13:07.246039] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.322 [2024-07-25 10:13:07.246046] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.322 [2024-07-25 10:13:07.256425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.322 qpair failed and we were unable to recover it.
00:24:22.322 [2024-07-25 10:13:07.265939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.322 [2024-07-25 10:13:07.265971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.322 [2024-07-25 10:13:07.265986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.322 [2024-07-25 10:13:07.265993] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.322 [2024-07-25 10:13:07.265999] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.322 [2024-07-25 10:13:07.276425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.322 qpair failed and we were unable to recover it.
00:24:22.322 [2024-07-25 10:13:07.286072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.322 [2024-07-25 10:13:07.286111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.322 [2024-07-25 10:13:07.286125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.322 [2024-07-25 10:13:07.286137] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.322 [2024-07-25 10:13:07.286143] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.322 [2024-07-25 10:13:07.296568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.322 qpair failed and we were unable to recover it.
00:24:22.322 [2024-07-25 10:13:07.306152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.322 [2024-07-25 10:13:07.306193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.322 [2024-07-25 10:13:07.306207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.322 [2024-07-25 10:13:07.306214] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.323 [2024-07-25 10:13:07.306220] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.323 [2024-07-25 10:13:07.316596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.323 qpair failed and we were unable to recover it.
00:24:22.323 [2024-07-25 10:13:07.326211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.323 [2024-07-25 10:13:07.326249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.323 [2024-07-25 10:13:07.326264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.323 [2024-07-25 10:13:07.326271] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.323 [2024-07-25 10:13:07.326277] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.323 [2024-07-25 10:13:07.336723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.323 qpair failed and we were unable to recover it.
00:24:22.323 [2024-07-25 10:13:07.346286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.323 [2024-07-25 10:13:07.346326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.323 [2024-07-25 10:13:07.346340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.323 [2024-07-25 10:13:07.346347] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.323 [2024-07-25 10:13:07.346353] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.323 [2024-07-25 10:13:07.356833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.323 qpair failed and we were unable to recover it.
00:24:22.323 [2024-07-25 10:13:07.366433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.323 [2024-07-25 10:13:07.366471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.323 [2024-07-25 10:13:07.366485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.323 [2024-07-25 10:13:07.366492] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.323 [2024-07-25 10:13:07.366498] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.323 [2024-07-25 10:13:07.376871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.323 qpair failed and we were unable to recover it.
00:24:22.323 [2024-07-25 10:13:07.386545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.323 [2024-07-25 10:13:07.386586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.323 [2024-07-25 10:13:07.386600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.323 [2024-07-25 10:13:07.386607] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.323 [2024-07-25 10:13:07.386614] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.323 [2024-07-25 10:13:07.396794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.323 qpair failed and we were unable to recover it.
00:24:22.323 [2024-07-25 10:13:07.406299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.323 [2024-07-25 10:13:07.406335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.323 [2024-07-25 10:13:07.406352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.323 [2024-07-25 10:13:07.406359] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.323 [2024-07-25 10:13:07.406365] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.323 [2024-07-25 10:13:07.416827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.323 qpair failed and we were unable to recover it.
00:24:22.323 [2024-07-25 10:13:07.426481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.323 [2024-07-25 10:13:07.426520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.323 [2024-07-25 10:13:07.426535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.323 [2024-07-25 10:13:07.426542] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.323 [2024-07-25 10:13:07.426548] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.323 [2024-07-25 10:13:07.437085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.323 qpair failed and we were unable to recover it.
00:24:22.323 [2024-07-25 10:13:07.446630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.323 [2024-07-25 10:13:07.446666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.323 [2024-07-25 10:13:07.446680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.323 [2024-07-25 10:13:07.446687] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.323 [2024-07-25 10:13:07.446693] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.323 [2024-07-25 10:13:07.457081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.323 qpair failed and we were unable to recover it.
00:24:22.323 [2024-07-25 10:13:07.466708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.323 [2024-07-25 10:13:07.466745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.323 [2024-07-25 10:13:07.466760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.323 [2024-07-25 10:13:07.466767] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.323 [2024-07-25 10:13:07.466773] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.323 [2024-07-25 10:13:07.477064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.323 qpair failed and we were unable to recover it.
00:24:22.583 [2024-07-25 10:13:07.486666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.583 [2024-07-25 10:13:07.486704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.583 [2024-07-25 10:13:07.486718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.583 [2024-07-25 10:13:07.486725] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.583 [2024-07-25 10:13:07.486734] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.583 [2024-07-25 10:13:07.497131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.583 qpair failed and we were unable to recover it.
00:24:22.583 [2024-07-25 10:13:07.506913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.583 [2024-07-25 10:13:07.506950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.583 [2024-07-25 10:13:07.506965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.583 [2024-07-25 10:13:07.506972] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.583 [2024-07-25 10:13:07.506978] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.583 [2024-07-25 10:13:07.516939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.583 qpair failed and we were unable to recover it.
00:24:22.583 [2024-07-25 10:13:07.526834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.583 [2024-07-25 10:13:07.526872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.583 [2024-07-25 10:13:07.526887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.583 [2024-07-25 10:13:07.526894] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.583 [2024-07-25 10:13:07.526900] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.583 [2024-07-25 10:13:07.537369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.583 qpair failed and we were unable to recover it.
00:24:22.583 [2024-07-25 10:13:07.546862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.583 [2024-07-25 10:13:07.546899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.583 [2024-07-25 10:13:07.546913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.583 [2024-07-25 10:13:07.546920] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.583 [2024-07-25 10:13:07.546926] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.583 [2024-07-25 10:13:07.557334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.583 qpair failed and we were unable to recover it.
00:24:22.583 [2024-07-25 10:13:07.566835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.583 [2024-07-25 10:13:07.566876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.583 [2024-07-25 10:13:07.566890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.583 [2024-07-25 10:13:07.566897] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.583 [2024-07-25 10:13:07.566903] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.583 [2024-07-25 10:13:07.577193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.583 qpair failed and we were unable to recover it.
00:24:22.583 [2024-07-25 10:13:07.586928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.583 [2024-07-25 10:13:07.586960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.583 [2024-07-25 10:13:07.586975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.583 [2024-07-25 10:13:07.586982] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.583 [2024-07-25 10:13:07.586988] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.583 [2024-07-25 10:13:07.597352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.583 qpair failed and we were unable to recover it.
00:24:22.583 [2024-07-25 10:13:07.607004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.583 [2024-07-25 10:13:07.607044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.583 [2024-07-25 10:13:07.607059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.583 [2024-07-25 10:13:07.607067] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.583 [2024-07-25 10:13:07.607073] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.583 [2024-07-25 10:13:07.617526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.583 qpair failed and we were unable to recover it.
00:24:22.583 [2024-07-25 10:13:07.627084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.583 [2024-07-25 10:13:07.627125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.583 [2024-07-25 10:13:07.627145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.583 [2024-07-25 10:13:07.627152] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.583 [2024-07-25 10:13:07.627159] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.583 [2024-07-25 10:13:07.637674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.583 qpair failed and we were unable to recover it.
00:24:22.583 [2024-07-25 10:13:07.647220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.583 [2024-07-25 10:13:07.647260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.583 [2024-07-25 10:13:07.647275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.583 [2024-07-25 10:13:07.647282] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.583 [2024-07-25 10:13:07.647288] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.583 [2024-07-25 10:13:07.657670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.583 qpair failed and we were unable to recover it.
00:24:22.583 [2024-07-25 10:13:07.667148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.584 [2024-07-25 10:13:07.667190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.584 [2024-07-25 10:13:07.667204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.584 [2024-07-25 10:13:07.667215] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.584 [2024-07-25 10:13:07.667221] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.584 [2024-07-25 10:13:07.677720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.584 qpair failed and we were unable to recover it.
00:24:22.584 [2024-07-25 10:13:07.687255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.584 [2024-07-25 10:13:07.687294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.584 [2024-07-25 10:13:07.687308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.584 [2024-07-25 10:13:07.687315] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.584 [2024-07-25 10:13:07.687321] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.584 [2024-07-25 10:13:07.697816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.584 qpair failed and we were unable to recover it.
00:24:22.584 [2024-07-25 10:13:07.707245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.584 [2024-07-25 10:13:07.707288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.584 [2024-07-25 10:13:07.707303] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.584 [2024-07-25 10:13:07.707309] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.584 [2024-07-25 10:13:07.707315] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.584 [2024-07-25 10:13:07.717660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.584 qpair failed and we were unable to recover it.
00:24:22.584 [2024-07-25 10:13:07.727342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.584 [2024-07-25 10:13:07.727381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.584 [2024-07-25 10:13:07.727397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.584 [2024-07-25 10:13:07.727404] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.584 [2024-07-25 10:13:07.727409] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.584 [2024-07-25 10:13:07.737798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.584 qpair failed and we were unable to recover it.
00:24:22.843 [2024-07-25 10:13:07.747481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.843 [2024-07-25 10:13:07.747517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.843 [2024-07-25 10:13:07.747531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.843 [2024-07-25 10:13:07.747538] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.843 [2024-07-25 10:13:07.747544] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.843 [2024-07-25 10:13:07.758022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.843 qpair failed and we were unable to recover it.
00:24:22.843 [2024-07-25 10:13:07.767451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.843 [2024-07-25 10:13:07.767487] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.843 [2024-07-25 10:13:07.767501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.843 [2024-07-25 10:13:07.767508] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.843 [2024-07-25 10:13:07.767514] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.843 [2024-07-25 10:13:07.777817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.843 qpair failed and we were unable to recover it.
00:24:22.843 [2024-07-25 10:13:07.787588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.843 [2024-07-25 10:13:07.787627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.843 [2024-07-25 10:13:07.787641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.843 [2024-07-25 10:13:07.787649] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.843 [2024-07-25 10:13:07.787655] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.843 [2024-07-25 10:13:07.797872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.843 qpair failed and we were unable to recover it.
00:24:22.843 [2024-07-25 10:13:07.807567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.843 [2024-07-25 10:13:07.807606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.843 [2024-07-25 10:13:07.807620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.844 [2024-07-25 10:13:07.807627] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.844 [2024-07-25 10:13:07.807633] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.844 [2024-07-25 10:13:07.818118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.844 qpair failed and we were unable to recover it.
00:24:22.844 [2024-07-25 10:13:07.827755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.844 [2024-07-25 10:13:07.827794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.844 [2024-07-25 10:13:07.827809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.844 [2024-07-25 10:13:07.827816] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.844 [2024-07-25 10:13:07.827822] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.844 [2024-07-25 10:13:07.837996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.844 qpair failed and we were unable to recover it.
00:24:22.844 [2024-07-25 10:13:07.847744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.844 [2024-07-25 10:13:07.847781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.844 [2024-07-25 10:13:07.847800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.844 [2024-07-25 10:13:07.847806] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.844 [2024-07-25 10:13:07.847812] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.844 [2024-07-25 10:13:07.858085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.844 qpair failed and we were unable to recover it.
00:24:22.844 [2024-07-25 10:13:07.867825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.844 [2024-07-25 10:13:07.867869] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.844 [2024-07-25 10:13:07.867884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.844 [2024-07-25 10:13:07.867891] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.844 [2024-07-25 10:13:07.867897] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.844 [2024-07-25 10:13:07.878141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.844 qpair failed and we were unable to recover it.
00:24:22.844 [2024-07-25 10:13:07.887864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.844 [2024-07-25 10:13:07.887902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.844 [2024-07-25 10:13:07.887916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.844 [2024-07-25 10:13:07.887923] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.844 [2024-07-25 10:13:07.887930] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.844 [2024-07-25 10:13:07.898214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.844 qpair failed and we were unable to recover it.
00:24:22.844 [2024-07-25 10:13:07.907938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.844 [2024-07-25 10:13:07.907975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.844 [2024-07-25 10:13:07.907989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.844 [2024-07-25 10:13:07.907996] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.844 [2024-07-25 10:13:07.908002] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.844 [2024-07-25 10:13:07.918348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.844 qpair failed and we were unable to recover it.
00:24:22.844 [2024-07-25 10:13:07.928029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.844 [2024-07-25 10:13:07.928070] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.844 [2024-07-25 10:13:07.928086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.844 [2024-07-25 10:13:07.928092] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.844 [2024-07-25 10:13:07.928101] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.844 [2024-07-25 10:13:07.938412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.844 qpair failed and we were unable to recover it.
00:24:22.844 [2024-07-25 10:13:07.948090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.844 [2024-07-25 10:13:07.948138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.844 [2024-07-25 10:13:07.948152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.844 [2024-07-25 10:13:07.948159] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.844 [2024-07-25 10:13:07.948165] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.844 [2024-07-25 10:13:07.958393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.844 qpair failed and we were unable to recover it.
00:24:22.844 [2024-07-25 10:13:07.968191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.844 [2024-07-25 10:13:07.968231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.844 [2024-07-25 10:13:07.968245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.844 [2024-07-25 10:13:07.968252] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.844 [2024-07-25 10:13:07.968258] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.844 [2024-07-25 10:13:07.978546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.844 qpair failed and we were unable to recover it.
00:24:22.844 [2024-07-25 10:13:07.988224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:22.844 [2024-07-25 10:13:07.988263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:22.844 [2024-07-25 10:13:07.988278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:22.844 [2024-07-25 10:13:07.988284] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:22.844 [2024-07-25 10:13:07.988291] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:22.844 [2024-07-25 10:13:07.998548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:22.844 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.008182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.008221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.008235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.008241] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.008247] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.018577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.028385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.028424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.028439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.028446] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.028451] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.038707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.048341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.048380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.048395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.048402] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.048408] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.058694] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.068523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.068561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.068576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.068583] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.068590] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.078870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.088542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.088582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.088596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.088603] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.088608] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.098683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.108534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.108570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.108584] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.108596] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.108602] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.118764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.128666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.128701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.128715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.128722] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.128728] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.138853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.148774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.148813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.148827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.148834] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.148840] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.159024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.168701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.168741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.168765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.168772] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.168778] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.178925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.188769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.188811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.188825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.188832] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.188837] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.198957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.208822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.208863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.208878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.208885] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.208891] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.219191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.228867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.104 [2024-07-25 10:13:08.228905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.104 [2024-07-25 10:13:08.228920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.104 [2024-07-25 10:13:08.228927] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.104 [2024-07-25 10:13:08.228933] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.104 [2024-07-25 10:13:08.239170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.104 qpair failed and we were unable to recover it.
00:24:23.104 [2024-07-25 10:13:08.248937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.105 [2024-07-25 10:13:08.248976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.105 [2024-07-25 10:13:08.248991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.105 [2024-07-25 10:13:08.248998] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.105 [2024-07-25 10:13:08.249003] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.105 [2024-07-25 10:13:08.259283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.105 qpair failed and we were unable to recover it.
00:24:23.364 [2024-07-25 10:13:08.268889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.364 [2024-07-25 10:13:08.268927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.364 [2024-07-25 10:13:08.268941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.364 [2024-07-25 10:13:08.268947] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.364 [2024-07-25 10:13:08.268954] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.364 [2024-07-25 10:13:08.279229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.364 qpair failed and we were unable to recover it.
00:24:23.364 [2024-07-25 10:13:08.288990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:24:23.364 [2024-07-25 10:13:08.289028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:24:23.364 [2024-07-25 10:13:08.289046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:24:23.364 [2024-07-25 10:13:08.289053] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:24:23.364 [2024-07-25 10:13:08.289059] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000
00:24:23.364 [2024-07-25 10:13:08.299294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:23.364 qpair failed and we were unable to recover it.
00:24:23.364 [2024-07-25 10:13:08.309087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.364 [2024-07-25 10:13:08.309133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.364 [2024-07-25 10:13:08.309147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.364 [2024-07-25 10:13:08.309154] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.364 [2024-07-25 10:13:08.309160] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.364 [2024-07-25 10:13:08.319455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.364 qpair failed and we were unable to recover it. 00:24:23.364 [2024-07-25 10:13:08.329121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.364 [2024-07-25 10:13:08.329166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.364 [2024-07-25 10:13:08.329180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.364 [2024-07-25 10:13:08.329187] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.364 [2024-07-25 10:13:08.329193] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.364 [2024-07-25 10:13:08.339335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.364 qpair failed and we were unable to recover it. 00:24:23.364 [2024-07-25 10:13:08.349210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.364 [2024-07-25 10:13:08.349250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.364 [2024-07-25 10:13:08.349264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.364 [2024-07-25 10:13:08.349271] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.364 [2024-07-25 10:13:08.349277] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.364 [2024-07-25 10:13:08.359534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.364 qpair failed and we were unable to recover it. 
00:24:23.364 [2024-07-25 10:13:08.369250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.364 [2024-07-25 10:13:08.369290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.365 [2024-07-25 10:13:08.369304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.365 [2024-07-25 10:13:08.369312] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.365 [2024-07-25 10:13:08.369321] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.365 [2024-07-25 10:13:08.379687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.365 qpair failed and we were unable to recover it. 00:24:23.365 [2024-07-25 10:13:08.389299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.365 [2024-07-25 10:13:08.389331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.365 [2024-07-25 10:13:08.389346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.365 [2024-07-25 10:13:08.389353] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.365 [2024-07-25 10:13:08.389359] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.365 [2024-07-25 10:13:08.399686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.365 qpair failed and we were unable to recover it. 00:24:23.365 [2024-07-25 10:13:08.409494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.365 [2024-07-25 10:13:08.409534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.365 [2024-07-25 10:13:08.409549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.365 [2024-07-25 10:13:08.409557] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.365 [2024-07-25 10:13:08.409563] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.365 [2024-07-25 10:13:08.419823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.365 qpair failed and we were unable to recover it. 
00:24:23.365 [2024-07-25 10:13:08.429360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.365 [2024-07-25 10:13:08.429398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.365 [2024-07-25 10:13:08.429414] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.365 [2024-07-25 10:13:08.429420] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.365 [2024-07-25 10:13:08.429426] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.365 [2024-07-25 10:13:08.439882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.365 qpair failed and we were unable to recover it. 00:24:23.365 [2024-07-25 10:13:08.449341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.365 [2024-07-25 10:13:08.449374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.365 [2024-07-25 10:13:08.449388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.365 [2024-07-25 10:13:08.449395] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.365 [2024-07-25 10:13:08.449402] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.365 [2024-07-25 10:13:08.459838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.365 qpair failed and we were unable to recover it. 00:24:23.365 [2024-07-25 10:13:08.469514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.365 [2024-07-25 10:13:08.469554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.365 [2024-07-25 10:13:08.469568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.365 [2024-07-25 10:13:08.469575] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.365 [2024-07-25 10:13:08.469581] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.365 [2024-07-25 10:13:08.479894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.365 qpair failed and we were unable to recover it. 
00:24:23.365 [2024-07-25 10:13:08.489508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.365 [2024-07-25 10:13:08.489549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.365 [2024-07-25 10:13:08.489563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.365 [2024-07-25 10:13:08.489570] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.365 [2024-07-25 10:13:08.489577] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.365 [2024-07-25 10:13:08.500078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.365 qpair failed and we were unable to recover it. 00:24:23.365 [2024-07-25 10:13:08.509566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.365 [2024-07-25 10:13:08.509603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.365 [2024-07-25 10:13:08.509617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.365 [2024-07-25 10:13:08.509624] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.365 [2024-07-25 10:13:08.509631] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.365 [2024-07-25 10:13:08.520066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.365 qpair failed and we were unable to recover it. 00:24:23.625 [2024-07-25 10:13:08.529625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.529661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.529676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.529683] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.529689] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.539958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 
00:24:23.625 [2024-07-25 10:13:08.549795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.549834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.549849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.549858] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.549865] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.560109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 00:24:23.625 [2024-07-25 10:13:08.569838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.569874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.569888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.569895] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.569901] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.580303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 00:24:23.625 [2024-07-25 10:13:08.589868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.589905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.589919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.589926] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.589932] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.600348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 
00:24:23.625 [2024-07-25 10:13:08.609969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.610009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.610024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.610032] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.610038] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.620310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 00:24:23.625 [2024-07-25 10:13:08.629927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.629959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.629975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.629982] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.629988] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.640369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 00:24:23.625 [2024-07-25 10:13:08.649966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.650004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.650019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.650026] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.650032] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.660540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 
00:24:23.625 [2024-07-25 10:13:08.670067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.670109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.670124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.670135] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.670142] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.680696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 00:24:23.625 [2024-07-25 10:13:08.690179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.690217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.690233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.690240] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.690246] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.700781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 00:24:23.625 [2024-07-25 10:13:08.710257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.710291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.710305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.710312] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.710318] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.720661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 
00:24:23.625 [2024-07-25 10:13:08.730249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.730286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.625 [2024-07-25 10:13:08.730304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.625 [2024-07-25 10:13:08.730311] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.625 [2024-07-25 10:13:08.730317] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.625 [2024-07-25 10:13:08.740789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.625 qpair failed and we were unable to recover it. 00:24:23.625 [2024-07-25 10:13:08.750521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.625 [2024-07-25 10:13:08.750563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.626 [2024-07-25 10:13:08.750578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.626 [2024-07-25 10:13:08.750584] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.626 [2024-07-25 10:13:08.750591] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.626 [2024-07-25 10:13:08.760887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.626 qpair failed and we were unable to recover it. 00:24:23.626 [2024-07-25 10:13:08.770494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.626 [2024-07-25 10:13:08.770538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.626 [2024-07-25 10:13:08.770553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.626 [2024-07-25 10:13:08.770560] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.626 [2024-07-25 10:13:08.770566] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.626 [2024-07-25 10:13:08.780956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.626 qpair failed and we were unable to recover it. 
00:24:23.885 [2024-07-25 10:13:08.790542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.790582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.790596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.790604] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.790609] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:08.800886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 00:24:23.885 [2024-07-25 10:13:08.810467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.810505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.810519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.810526] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.810537] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:08.820958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 00:24:23.885 [2024-07-25 10:13:08.830689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.830733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.830747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.830754] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.830760] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:08.840886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 
00:24:23.885 [2024-07-25 10:13:08.850717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.850752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.850767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.850774] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.850781] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:08.861154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 00:24:23.885 [2024-07-25 10:13:08.870815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.870848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.870863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.870870] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.870876] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:08.881144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 00:24:23.885 [2024-07-25 10:13:08.890856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.890896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.890910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.890917] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.890923] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:08.901343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 
00:24:23.885 [2024-07-25 10:13:08.910810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.910858] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.910873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.910880] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.910886] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:08.921341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 00:24:23.885 [2024-07-25 10:13:08.930901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.930934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.930949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.930956] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.930962] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:08.941448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 00:24:23.885 [2024-07-25 10:13:08.950880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.950917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.950932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.950939] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.950945] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:08.961384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 
00:24:23.885 [2024-07-25 10:13:08.970957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.970999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.971014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.971021] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.971026] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:08.981631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 00:24:23.885 [2024-07-25 10:13:08.991091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:08.991136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:08.991150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:08.991160] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:08.991166] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:09.001349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 00:24:23.885 [2024-07-25 10:13:09.011173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:23.885 [2024-07-25 10:13:09.011212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:23.885 [2024-07-25 10:13:09.011226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:23.885 [2024-07-25 10:13:09.011233] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:23.885 [2024-07-25 10:13:09.011239] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:23.885 [2024-07-25 10:13:09.021697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:23.885 qpair failed and we were unable to recover it. 
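
Every retry above ends the same way: spdk_nvme_qpair_process_completions() reports "CQ transport error -6", i.e. -ENXIO. A minimal sketch of how a caller polls a qpair and hits that return, assuming an already-connected struct spdk_nvme_qpair:

    #include <errno.h>
    #include "spdk/nvme.h"

    static void poll_qpair(struct spdk_nvme_qpair *qpair)
    {
        /* Returns the number of completions drained, or a negative errno.
         * -ENXIO (-6) is the "CQ transport error -6 (No such device or
         * address)" printed throughout this log. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no cap */);
        if (rc == -ENXIO) {
            /* The qpair is unrecoverable and must be reconnected or
             * destroyed -- exactly the path this test exercises. */
        }
    }
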
00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Write completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 Read completed with error (sct=0, sc=8) 00:24:25.262 starting I/O failed 00:24:25.262 [2024-07-25 10:13:10.027215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:25.262 [2024-07-25 10:13:10.033961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.262 [2024-07-25 10:13:10.034008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.262 [2024-07-25 10:13:10.034025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.262 [2024-07-25 10:13:10.034033] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:24:25.262 [2024-07-25 10:13:10.034042] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cef80 00:24:25.262 [2024-07-25 10:13:10.044652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:25.262 qpair failed and we were unable to recover it. 00:24:25.262 [2024-07-25 10:13:10.054231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.262 [2024-07-25 10:13:10.054274] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.262 [2024-07-25 10:13:10.054290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.262 [2024-07-25 10:13:10.054297] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.262 [2024-07-25 10:13:10.054303] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002cef80 00:24:25.262 [2024-07-25 10:13:10.064616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:25.262 qpair failed and we were unable to recover it. 00:24:25.262 [2024-07-25 10:13:10.074541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.262 [2024-07-25 10:13:10.074592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.262 [2024-07-25 10:13:10.074624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.262 [2024-07-25 10:13:10.074639] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.262 [2024-07-25 10:13:10.074651] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:25.262 [2024-07-25 10:13:10.084800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:25.262 qpair failed and we were unable to recover it. 00:24:25.262 [2024-07-25 10:13:10.094436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:24:25.263 [2024-07-25 10:13:10.094472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:24:25.263 [2024-07-25 10:13:10.094490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:24:25.263 [2024-07-25 10:13:10.094498] nvme_rdma.c:1328:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:24:25.263 [2024-07-25 10:13:10.094505] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf800 00:24:25.263 [2024-07-25 10:13:10.104661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:25.263 qpair failed and we were unable to recover it. 
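
For reference, the status pair that keeps appearing decodes as follows: sct 1 is the command-specific status type, and for a Fabrics CONNECT command sc 130 (0x82) is Invalid Parameters, matching the target's "Unknown controller ID" complaint. A small sketch with values from the NVMe-oF spec (the names are illustrative, not SPDK's):

    /* sct/sc seen in the CONNECT failures above, per the NVMe-oF spec. */
    enum {
        NVME_SCT_COMMAND_SPECIFIC    = 0x1,  /* sct 1 */
        NVMF_FABRIC_SC_INVALID_PARAM = 0x82  /* sc 130: CONNECT Invalid Parameters */
    };
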
00:24:25.263 [2024-07-25 10:13:10.104784] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:25.263 A controller has encountered a failure and is being reset. 00:24:25.263 [2024-07-25 10:13:10.104898] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:25.263 [2024-07-25 10:13:10.138514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:25.263 Controller properly reset. 00:24:25.263 Initializing NVMe Controllers 00:24:25.263 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.263 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:25.263 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:25.263 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:25.263 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:25.263 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:25.263 Initialization complete. Launching workers. 00:24:25.263 Starting thread on core 1 00:24:25.263 Starting thread on core 2 00:24:25.263 Starting thread on core 3 00:24:25.263 Starting thread on core 0 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:24:25.263 00:24:25.263 real 0m12.539s 00:24:25.263 user 0m28.247s 00:24:25.263 sys 0m2.165s 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:25.263 ************************************ 00:24:25.263 END TEST nvmf_target_disconnect_tc2 00:24:25.263 ************************************ 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n 192.168.100.9 ']' 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@73 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:25.263 ************************************ 00:24:25.263 START TEST nvmf_target_disconnect_tc3 00:24:25.263 ************************************ 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc3 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@57 -- # reconnectpid=2671051 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@59 -- # sleep 2 00:24:25.263 10:13:10 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@55 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:24:25.263 EAL: No free 2048 kB hugepages reported on node 1 00:24:27.164 10:13:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@60 -- # kill -9 2669847 00:24:27.164 10:13:12 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@62 -- # sleep 2 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Read completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 Write completed with error (sct=0, sc=8) 00:24:28.538 starting I/O failed 00:24:28.538 [2024-07-25 10:13:13.461599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:28.538 [2024-07-25 10:13:13.463497] nvme_rdma.c: 
541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:28.538 [2024-07-25 10:13:13.463543] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:28.538 [2024-07-25 10:13:13.463563] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c40 00:24:29.530 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 54: 2669847 Killed "${NVMF_APP[@]}" "$@" 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@63 -- # disconnect_init 192.168.100.9 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2671725 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2671725 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@480 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2671725 ']' 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:29.530 10:13:14 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:29.530 [2024-07-25 10:13:14.333609] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
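
At this point test case tc3 kills the first target app (pid 2669847) and starts a new one listening on the alternate address 192.168.100.9 while the reconnect example keeps retrying; with the old target gone, the RDMA connection manager hands back REJECTED instead of ESTABLISHED. A minimal librdmacm sketch of how such a CM event is fetched and checked (a stand-in for SPDK's internal nvme_rdma_validate_cm_event, not a copy of it):

    #include <rdma/rdma_cma.h>

    /* Block for one CM event and require ESTABLISHED; returns 0 on success. */
    static int wait_for_established(struct rdma_event_channel *channel)
    {
        struct rdma_cm_event *event;
        int ok;

        if (rdma_get_cm_event(channel, &event) != 0) {
            return -1;                      /* errno is set */
        }
        ok = (event->event == RDMA_CM_EVENT_ESTABLISHED);
        /* e.g. RDMA_CM_EVENT_REJECTED (8) above, while the target is down. */
        rdma_ack_cm_event(event);
        return ok ? 0 : -1;
    }
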
00:24:29.530 [2024-07-25 10:13:14.333655] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.530 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.530 [2024-07-25 10:13:14.403739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:29.530 [2024-07-25 10:13:14.467463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:24:29.530 qpair failed and we were unable to recover it. 00:24:29.530 [2024-07-25 10:13:14.474222] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.530 [2024-07-25 10:13:14.474255] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.530 [2024-07-25 10:13:14.474262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.530 [2024-07-25 10:13:14.474269] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.530 [2024-07-25 10:13:14.474273] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:29.530 [2024-07-25 10:13:14.474389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:29.530 [2024-07-25 10:13:14.474499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:29.530 [2024-07-25 10:13:14.474606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:29.530 [2024-07-25 10:13:14.474607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@864 -- # return 0 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.098 Malloc0 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:30.098 10:13:15 
nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.098 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.098 [2024-07-25 10:13:15.224513] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1968cf0/0x19748c0) succeed. 00:24:30.098 [2024-07-25 10:13:15.234077] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x196a330/0x19b5f50) succeed. 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.357 [2024-07-25 10:13:15.375578] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.357 10:13:15 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@65 -- # wait 2671051 00:24:30.357 Write completed with error (sct=0, sc=8) 
00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Read completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 Write completed with error (sct=0, sc=8) 00:24:30.357 starting I/O failed 00:24:30.357 [2024-07-25 10:13:15.472500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:30.357 [2024-07-25 10:13:15.474055] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:30.357 [2024-07-25 10:13:15.474072] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:30.357 [2024-07-25 10:13:15.474082] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:31.733 [2024-07-25 10:13:16.477969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:31.733 qpair failed 
and we were unable to recover it. 00:24:31.733 [2024-07-25 10:13:16.479438] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:31.733 [2024-07-25 10:13:16.479452] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:31.733 [2024-07-25 10:13:16.479459] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:32.669 [2024-07-25 10:13:17.483412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:32.669 qpair failed and we were unable to recover it. 00:24:32.669 [2024-07-25 10:13:17.484830] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:32.669 [2024-07-25 10:13:17.484845] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:32.669 [2024-07-25 10:13:17.484851] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:33.604 [2024-07-25 10:13:18.488804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:33.604 qpair failed and we were unable to recover it. 00:24:33.604 [2024-07-25 10:13:18.490223] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:33.604 [2024-07-25 10:13:18.490237] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:33.604 [2024-07-25 10:13:18.490243] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:34.540 [2024-07-25 10:13:19.494113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:34.540 qpair failed and we were unable to recover it. 00:24:34.540 [2024-07-25 10:13:19.495637] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:34.540 [2024-07-25 10:13:19.495652] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:34.540 [2024-07-25 10:13:19.495658] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:35.476 [2024-07-25 10:13:20.499555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:35.476 qpair failed and we were unable to recover it. 00:24:35.476 [2024-07-25 10:13:20.501020] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:35.476 [2024-07-25 10:13:20.501040] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:35.477 [2024-07-25 10:13:20.501046] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:36.412 [2024-07-25 10:13:21.504720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:36.412 qpair failed and we were unable to recover it. 
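Two failure signatures alternate through this stretch. The per-I/O lines 'completed with error (sct=0, sc=8)' are host-side completions on the dying qpair: status code type 0 is the NVMe generic command set, where status 0x08 is 'Command Aborted due to SQ Deletion', which is what outstanding I/O sees when its queue pair is torn down. Around them, a roughly once-per-second cycle of RDMA_CM_EVENT_REJECTED, 'RDMA connect error -74', and 'Failed to connect rqpair' shows the host retrying while no listener is up yet. When triaging a long captured log, a quick tally separates the two signatures (a sketch; the log file name is hypothetical):

    # Count completion errors vs. CM-level rejections in a saved console log
    grep -Eo 'completed with error \(sct=[0-9]+, sc=[0-9]+\)|received RDMA_CM_EVENT_[A-Z_]+' nvmf_target_disconnect.log \
        | sort | uniq -c | sort -rn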
00:24:36.412 [2024-07-25 10:13:21.506137] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:36.412 [2024-07-25 10:13:21.506151] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:36.412 [2024-07-25 10:13:21.506156] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3000 00:24:37.790 [2024-07-25 10:13:22.509922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:24:37.790 qpair failed and we were unable to recover it. 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Read completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 Write completed with error (sct=0, sc=8) 00:24:38.364 starting I/O failed 00:24:38.364 [2024-07-25 10:13:23.515083] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:38.364 [2024-07-25 10:13:23.516595] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:38.364 [2024-07-25 10:13:23.516611] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:38.364 [2024-07-25 10:13:23.516617] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:24:39.742 [2024-07-25 10:13:24.520466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:39.742 qpair failed and we were unable to recover it. 00:24:39.742 [2024-07-25 10:13:24.522077] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:39.742 [2024-07-25 10:13:24.522093] nvme_rdma.c:1087:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:39.742 [2024-07-25 10:13:24.522099] nvme_rdma.c:2674:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d0180 00:24:40.676 [2024-07-25 10:13:25.525890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:24:40.676 qpair failed and we were unable to recover it. 00:24:40.676 [2024-07-25 10:13:25.526021] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:24:40.676 A controller has encountered a failure and is being reset. 00:24:40.676 Resorting to new failover address 192.168.100.9 00:24:40.676 [2024-07-25 10:13:25.526106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.676 [2024-07-25 10:13:25.526181] nvme_rdma.c: 541:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:24:40.676 [2024-07-25 10:13:25.528077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:40.676 Controller properly reset. 
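This is the recovery tc3 is exercising: once a keep-alive can no longer be submitted, the host marks the controller failed, resets it, and 'resorts' to the failover address 192.168.100.9 that the restarted target is listening on. For comparison only, on a kernel initiator the equivalent dual-path setup would be created explicitly with nvme-cli, roughly as below; this SPDK test instead feeds the second address to its own host application, so the sketch is an analogue, not what ran here.

    # Kernel-initiator analogue: connect the same subsystem over both addresses
    # so native NVMe multipath can fail over between the paths
    nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme connect -t rdma -a 192.168.100.9 -s 4420 -n nqn.2016-06.io.spdk:cnode1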
00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Write completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 Read completed with error (sct=0, sc=8) 00:24:41.611 starting I/O failed 00:24:41.611 [2024-07-25 10:13:26.575375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:24:41.611 Initializing NVMe Controllers 00:24:41.611 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.611 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.611 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:24:41.611 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:24:41.611 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:24:41.611 Associating RDMA (addr:192.168.100.8 
subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:24:41.611 Initialization complete. Launching workers. 00:24:41.611 Starting thread on core 1 00:24:41.611 Starting thread on core 2 00:24:41.611 Starting thread on core 3 00:24:41.611 Starting thread on core 0 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- host/target_disconnect.sh@66 -- # sync 00:24:41.611 00:24:41.611 real 0m16.361s 00:24:41.611 user 0m59.938s 00:24:41.611 sys 0m3.317s 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:41.611 ************************************ 00:24:41.611 END TEST nvmf_target_disconnect_tc3 00:24:41.611 ************************************ 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == tcp ']' 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' rdma == rdma ']' 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma 00:24:41.611 rmmod nvme_rdma 00:24:41.611 rmmod nvme_fabrics 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2671725 ']' 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2671725 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2671725 ']' 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2671725 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2671725 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2671725' 00:24:41.611 killing 
process with pid 2671725 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2671725 00:24:41.611 10:13:26 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2671725 00:24:42.179 10:13:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:42.179 10:13:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]] 00:24:42.179 00:24:42.179 real 0m36.375s 00:24:42.179 user 2m20.525s 00:24:42.179 sys 0m10.400s 00:24:42.179 10:13:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:42.179 10:13:27 nvmf_rdma.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:42.179 ************************************ 00:24:42.179 END TEST nvmf_target_disconnect 00:24:42.179 ************************************ 00:24:42.179 10:13:27 nvmf_rdma.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:42.179 00:24:42.179 real 5m2.845s 00:24:42.179 user 12m36.028s 00:24:42.179 sys 1m21.245s 00:24:42.179 10:13:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:42.179 10:13:27 nvmf_rdma.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.179 ************************************ 00:24:42.179 END TEST nvmf_host 00:24:42.179 ************************************ 00:24:42.179 00:24:42.179 real 17m38.668s 00:24:42.179 user 43m39.322s 00:24:42.179 sys 4m25.250s 00:24:42.179 10:13:27 nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:42.179 10:13:27 nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:42.179 ************************************ 00:24:42.179 END TEST nvmf_rdma 00:24:42.179 ************************************ 00:24:42.179 10:13:27 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:42.179 10:13:27 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:42.179 10:13:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:42.179 10:13:27 -- common/autotest_common.sh@10 -- # set +x 00:24:42.179 ************************************ 00:24:42.179 START TEST spdkcli_nvmf_rdma 00:24:42.179 ************************************ 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:24:42.179 * Looking for test storage... 
00:24:42.179 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # uname -s 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@18 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- paths/export.sh@5 -- # export PATH 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@47 -- # : 0 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:24:42.179 10:13:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2673929 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- spdkcli/common.sh@34 -- # waitforlisten 2673929 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@831 -- # '[' -z 2673929 ']' 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:42.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:42.180 10:13:27 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:42.439 [2024-07-25 10:13:27.351107] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:42.439 [2024-07-25 10:13:27.351161] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2673929 ] 00:24:42.439 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.439 [2024-07-25 10:13:27.421374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:42.439 [2024-07-25 10:13:27.493950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.439 [2024-07-25 10:13:27.493950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@864 -- # return 0 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- nvmf/common.sh@441 -- # '[' -z rdma ']' 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- nvmf/common.sh@285 -- # xtrace_disable 00:24:43.373 10:13:28 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # pci_devs=() 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # net_devs=() 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # e810=() 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@296 -- # local -ga e810 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # x722=() 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@297 -- # local -ga x722 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # mlx=() 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@298 -- # local -ga mlx 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@321 -- # [[ rdma == rdma ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@322 -- # pci_devs+=("${x722[@]}") 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@323 -- # pci_devs+=("${mlx[@]}") 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@327 -- # [[ mlx5 == mlx5 ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@328 -- # pci_devs=("${mlx[@]}") 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.0 (0x15b3 - 0x1015)' 00:24:48.643 Found 0000:da:00.0 (0x15b3 - 0x1015) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@341 -- # echo 'Found 0000:da:00.1 (0x15b3 - 0x1015)' 00:24:48.643 Found 0000:da:00.1 
(0x15b3 - 0x1015) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@342 -- # [[ mlx5_core == unknown ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@346 -- # [[ mlx5_core == unbound ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@351 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@352 -- # [[ rdma == rdma ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@362 -- # NVME_CONNECT='nvme connect -i 15' 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@372 -- # [[ mlx5 == e810 ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.0: mlx_0_0' 00:24:48.643 Found net devices under 0000:da:00.0: mlx_0_0 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@388 -- # [[ rdma == tcp ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:da:00.1: mlx_0_1' 00:24:48.643 Found net devices under 0000:da:00.1: mlx_0_1 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@414 -- # is_hw=yes 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@417 -- # [[ rdma == tcp ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@419 -- # [[ rdma == rdma ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@420 -- # rdma_device_init 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@501 -- # load_ib_rdma_modules 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # uname 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@58 -- # '[' Linux '!=' Linux ']' 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@62 -- # modprobe ib_cm 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@63 -- # modprobe ib_core 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@64 -- # modprobe ib_umad 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@65 -- # modprobe ib_uverbs 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@66 -- # modprobe iw_cm 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@67 -- # modprobe rdma_cm 00:24:48.643 10:13:33 
spdkcli_nvmf_rdma -- nvmf/common.sh@68 -- # modprobe rdma_ucm 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@502 -- # allocate_nic_ips 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@72 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # get_rdma_if_list 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:24:48.643 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_0 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.8 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.8 ]] 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_0 00:24:48.644 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:48.644 link/ether ec:0d:9a:8b:2d:9c brd ff:ff:ff:ff:ff:ff 00:24:48.644 altname enp218s0f0np0 00:24:48.644 altname ens818f0np0 00:24:48.644 inet 192.168.100.8/24 scope global mlx_0_0 00:24:48.644 valid_lft forever preferred_lft forever 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@73 -- # for nic_name in $(get_rdma_if_list) 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # get_ip_address mlx_0_1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:48.644 10:13:33 
spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@74 -- # ip=192.168.100.9 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@75 -- # [[ -z 192.168.100.9 ]] 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@81 -- # ip addr show mlx_0_1 00:24:48.644 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:48.644 link/ether ec:0d:9a:8b:2d:9d brd ff:ff:ff:ff:ff:ff 00:24:48.644 altname enp218s0f1np1 00:24:48.644 altname ens818f1np1 00:24:48.644 inet 192.168.100.9/24 scope global mlx_0_1 00:24:48.644 valid_lft forever preferred_lft forever 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@422 -- # return 0 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@455 -- # [[ rdma == \r\d\m\a ]] 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # get_available_rdma_ips 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # get_rdma_if_list 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@92 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # mapfile -t rxe_net_devs 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@94 -- # rxe_cfg rxe-net 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@96 -- # (( 2 == 0 )) 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_0 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@101 -- # for net_dev in "${net_devs[@]}" 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@102 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@103 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@104 -- # echo mlx_0_1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@105 -- # continue 2 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_0 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@112 -- # interface=mlx_0_0 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_0 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@86 -- # for nic_name in $(get_rdma_if_list) 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@87 -- # get_ip_address mlx_0_1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma 
-- nvmf/common.sh@112 -- # interface=mlx_0_1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # ip -o -4 addr show mlx_0_1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # awk '{print $4}' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@113 -- # cut -d/ -f1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@456 -- # RDMA_IP_LIST='192.168.100.8 00:24:48.644 192.168.100.9' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # echo '192.168.100.8 00:24:48.644 192.168.100.9' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # head -n 1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@457 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # echo '192.168.100.8 00:24:48.644 192.168.100.9' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # tail -n +2 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # head -n 1 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@458 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@459 -- # '[' -z 192.168.100.8 ']' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@463 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == tcp ']' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@468 -- # '[' rdma == rdma ']' 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- nvmf/common.sh@474 -- # modprobe nvme-rdma 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:24:48.644 10:13:33 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:24:48.644 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:24:48.644 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:24:48.644 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:24:48.644 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:24:48.644 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:24:48.644 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:24:48.644 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:48.644 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create 
rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:48.644 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:24:48.644 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:24:48.644 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:24:48.644 ' 00:24:51.192 [2024-07-25 10:13:36.248021] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x179dcf0/0x1624600) succeed. 00:24:51.192 [2024-07-25 10:13:36.257354] rdma.c:2581:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x179f380/0x170f6c0) succeed. 
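With both mlx5 ports registered, spdkcli_job.py replays the quoted command list against the running target in a single batch. The same configuration can be applied one command at a time with scripts/spdkcli.py, which takes a command as its argument, as the 'll /nvmf' call later in this test does. A short excerpt, using commands copied verbatim from the batch above:

    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py "/bdevs/malloc create 32 512 Malloc1"
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py "nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4"
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf   # inspect the resulting tree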
00:24:52.569 [2024-07-25 10:13:37.599277] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 ***
00:24:55.102 [2024-07-25 10:13:39.986759] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 ***
00:24:57.005 [2024-07-25 10:13:42.041505] rdma.c:3036:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 ***
00:24:58.910 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:24:58.910 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:24:58.910 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:24:58.910 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:24:58.910 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:24:58.910 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:24:58.910 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:24:58.910 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:24:58.910 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:24:58.910 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:24:58.910 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:24:58.910 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:24:58.910 10:13:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:24:58.910 10:13:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:58.910 10:13:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:24:58.910 10:13:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:24:58.910 10:13:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:58.910 10:13:43 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:24:58.910 10:13:43 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@69 -- # check_match
00:24:58.910 10:13:43 spdkcli_nvmf_rdma -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:24:59.169 10:13:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:24:59.169 10:13:44 spdkcli_nvmf_rdma -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:24:59.169 10:13:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:24:59.169 10:13:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable
00:24:59.169 10:13:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:24:59.169 10:13:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:24:59.169 10:13:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:59.169 10:13:44 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:24:59.169 10:13:44 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:24:59.169 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:24:59.169 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:24:59.169 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:24:59.169 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\''
00:24:59.169 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\''
00:24:59.169 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:24:59.169 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:24:59.169 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:24:59.169 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:24:59.169 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:24:59.169 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:24:59.169 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:24:59.169 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:24:59.169 '
00:25:04.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:25:04.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:25:04.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:25:04.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:25:04.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False]
00:25:04.440 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False]
00:25:04.440 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:25:04.440 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:25:04.440 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:25:04.440 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:25:04.440 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:25:04.440 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:25:04.440 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:25:04.440 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:25:04.440 10:13:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:25:04.440 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:04.440 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@90 -- # killprocess 2673929
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@950 -- # '[' -z 2673929 ']'
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@954 -- # kill -0 2673929
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # uname
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2673929
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2673929'
killing process with pid 2673929
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@969 -- # kill 2673929
00:25:04.699 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@974 -- # wait 2673929
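The create batch, the match check, and the clear batch above all funnel through spdkcli_job.py, which appears to run each quoted command and compare the output against the quoted expectation. The same configuration can also be replayed one command at a time with spdkcli.py itself, the way check_match invokes it for "ll /nvmf". A rough sketch under that assumption, with a target already running and the commands copied from the batches above:

  SPDKCLI=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py
  # Back a namespace with a 32 MB, 512-byte-block malloc bdev.
  $SPDKCLI /bdevs/malloc create 32 512 Malloc1
  # Create a subsystem, then attach the bdev and an RDMA listener to it.
  $SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1
  $SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4
  # Dump the resulting tree, as check_match does before diffing.
  $SPDKCLI ll /nvmf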
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- spdkcli/nvmf.sh@1 -- # nvmftestfini
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@117 -- # sync
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == tcp ']'
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@119 -- # '[' rdma == rdma ']'
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@120 -- # set +e
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@122 -- # modprobe -v -r nvme-rdma
00:25:04.958 rmmod nvme_rdma
00:25:04.958 rmmod nvme_fabrics
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@124 -- # set -e
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@125 -- # return 0
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- nvmf/common.sh@495 -- # [[ rdma == \t\c\p ]]
00:25:04.958
00:25:04.958 real 0m22.765s
00:25:04.958 user 0m49.870s
00:25:04.958 sys 0m4.994s
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:04.958 10:13:49 spdkcli_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x
00:25:04.958 ************************************
00:25:04.958 END TEST spdkcli_nvmf_rdma
00:25:04.958 ************************************
00:25:04.958 10:13:49 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:25:04.958 10:13:49 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:25:04.958 10:13:49 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:25:04.958 10:13:49 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:25:04.958 10:13:49 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:25:04.958 10:13:49 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:25:04.958 10:13:49 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:25:04.958 10:13:49 -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:04.958 10:13:49 -- common/autotest_common.sh@10 -- # set +x
00:25:04.958 10:13:49 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:25:04.958 10:13:49 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:25:04.958 10:13:49 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:25:04.958 10:13:49 -- common/autotest_common.sh@10 -- # set +x
00:25:10.225 INFO: APP EXITING
00:25:10.225 INFO: killing all VMs
00:25:10.225 INFO: killing vhost app
00:25:10.225 INFO: EXIT DONE
00:25:12.128 Waiting for block devices as requested
00:25:12.387 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme
00:25:12.387 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:12.387 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:12.387 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:12.387 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:12.678 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:12.678 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:12.678 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:12.937 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:12.937 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:25:12.937 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:25:12.937 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:25:13.196 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:25:13.196 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:25:13.196 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:25:13.483 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:25:13.483 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:25:16.771 Cleaning
00:25:16.771 Removing: /var/run/dpdk/spdk0/config
00:25:16.771 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:25:16.771 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:25:16.771 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:25:16.771 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:25:16.771 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:25:16.771 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:25:16.771 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:25:16.771 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:25:16.771 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:25:16.771 Removing: /var/run/dpdk/spdk0/hugepage_info
00:25:16.771 Removing: /var/run/dpdk/spdk1/config
00:25:16.771 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:25:16.771 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:25:16.771 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:25:16.771 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:25:16.771 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:25:16.771 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:25:16.771 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:25:16.771 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:25:16.771 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:25:16.771 Removing: /var/run/dpdk/spdk1/hugepage_info
00:25:16.771 Removing: /var/run/dpdk/spdk1/mp_socket
00:25:16.771 Removing: /var/run/dpdk/spdk2/config
00:25:16.771 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:25:16.771 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:25:16.771 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:25:16.771 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:25:16.771 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:25:16.771 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:25:16.771 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:25:16.771 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:25:16.771 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:25:16.771 Removing: /var/run/dpdk/spdk2/hugepage_info
00:25:16.771 Removing: /var/run/dpdk/spdk3/config
00:25:16.771 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:25:16.771 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:25:16.771 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:25:16.771 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:25:16.771 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:25:16.771 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:25:16.771 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:25:16.771 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:25:16.771 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:25:16.771 Removing: /var/run/dpdk/spdk3/hugepage_info
00:25:16.771 Removing: /var/run/dpdk/spdk4/config
00:25:16.771 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:25:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:25:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:25:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:25:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:25:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:25:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:25:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:25:16.772 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:25:16.772 Removing: /var/run/dpdk/spdk4/hugepage_info
00:25:16.772 Removing: /dev/shm/bdevperf_trace.pid2425102
00:25:16.772 Removing: /dev/shm/bdevperf_trace.pid2592174
00:25:16.772 Removing: /dev/shm/bdev_svc_trace.1
00:25:16.772 Removing: /dev/shm/nvmf_trace.0
00:25:16.772 Removing: /dev/shm/spdk_tgt_trace.pid2382440
00:25:16.772 Removing: /var/run/dpdk/spdk0
00:25:16.772 Removing: /var/run/dpdk/spdk1
00:25:16.772 Removing: /var/run/dpdk/spdk2
00:25:16.772 Removing: /var/run/dpdk/spdk3
00:25:16.772 Removing: /var/run/dpdk/spdk4
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2379682
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2381138
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2382440
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2383079
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2384025
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2384261
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2385238
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2385446
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2385597
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2390327
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2391601
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2391893
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2392282
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2392692
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2392976
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2393226
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2393480
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2393757
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2394504
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2397493
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2397762
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2398033
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2398256
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2398744
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2398758
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2399250
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2399476
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2399738
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2399848
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2400019
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2400245
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2400704
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2400924
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2401236
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2405034
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2409048
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2419054
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2419977
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2425102
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2425459
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2429464
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2435126
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2437731
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2447572
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2471155
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2474748
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2515800
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2520810
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2526260
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2534805
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2590024
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2591090
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2592174
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2596104
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2602944
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2603858
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2604777
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2605819
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2606066
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2610760
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2610849
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2615160
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2615630
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2616264
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2616989
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2617004
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2621478
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2622052
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2626153
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2628910
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2634347
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2643984
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2644016
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2662805
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2663042
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2668860
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2669153
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2671051
00:25:16.772 Removing: /var/run/dpdk/spdk_pid2673929
00:25:16.772 Clean
00:25:16.790 10:14:01 -- common/autotest_common.sh@1451 -- # return 0
00:25:16.790 10:14:01 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:25:16.790 10:14:01 -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:16.790 10:14:01 -- common/autotest_common.sh@10 -- # set +x
00:25:17.031 10:14:01 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:25:17.031 10:14:01 -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:17.031 10:14:01 -- common/autotest_common.sh@10 -- # set +x
00:25:17.031 10:14:01 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:25:17.031 10:14:01 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]]
00:25:17.031 10:14:01 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log
00:25:17.031 10:14:01 -- spdk/autotest.sh@395 -- # hash lcov
00:25:17.031 10:14:01 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:25:17.031 10:14:01 -- spdk/autotest.sh@397 -- # hostname
00:25:17.031 10:14:01 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info
00:25:17.031 geninfo: WARNING: invalid characters removed from testname!
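The coverage post-processing that follows is standard lcov plumbing: the capture above wrote cov_test.info, and the next steps fold it into the pre-test baseline, then strip out code that is not SPDK's own. A condensed sketch of the same flow, with the long workspace paths shortened to $out for readability (the job itself passes the full --rc flag set on every call):

  # Merge the pre-test baseline with the post-test capture...
  lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  # ...then drop bundled DPDK, system headers and standalone apps from the totals.
  lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '/usr/*' -o "$out/cov_total.info"
  lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"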
00:25:38.959 10:14:20 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:38.959 10:14:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:39.526 10:14:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:41.430 10:14:26 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:42.808 10:14:27 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:44.712 10:14:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info
00:25:46.088 10:14:31 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:25:46.346 10:14:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh
00:25:46.346 10:14:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:25:46.346 10:14:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:46.346 10:14:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:46.347 10:14:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:46.347 10:14:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:46.347 10:14:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:46.347 10:14:31 -- paths/export.sh@5 -- $ export PATH
00:25:46.347 10:14:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:46.347 10:14:31 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output
00:25:46.347 10:14:31 -- common/autobuild_common.sh@447 -- $ date +%s
00:25:46.347 10:14:31 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721895271.XXXXXX
00:25:46.347 10:14:31 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721895271.hZLz5a
00:25:46.347 10:14:31 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:25:46.347 10:14:31 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:25:46.347 10:14:31 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/'
00:25:46.347 10:14:31 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp'
00:25:46.347 10:14:31 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:25:46.347 10:14:31 -- common/autobuild_common.sh@463 -- $ get_config_params
00:25:46.347 10:14:31 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:25:46.347 10:14:31 -- common/autotest_common.sh@10 -- $ set +x
00:25:46.347 10:14:31 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk'
00:25:46.347 10:14:31 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:25:46.347 10:14:31 -- pm/common@17 -- $ local monitor
00:25:46.347 10:14:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:46.347 10:14:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:46.347 10:14:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:46.347 10:14:31 -- pm/common@21 -- $ date +%s
00:25:46.347 10:14:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:46.347 10:14:31 -- pm/common@21 -- $ date +%s
00:25:46.347 10:14:31 -- pm/common@25 -- $ sleep 1
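The start_monitor_resources trace above and below backgrounds one collector per entry in MONITOR_RESOURCES (CPU load, vmstat, CPU temperature, BMC power), and stop_monitor_resources at the end of the log signals whichever pid each collector recorded. A rough sketch of that pid-file handshake, assuming the pid is written when the collector starts (the real bookkeeping lives in scripts/perf/pm/common and may differ in detail):

  power=$out/power   # $out as set by autobuild_common.sh above
  # Start one collector in the background and remember where to find it later.
  ./scripts/perf/pm/collect-cpu-load -d "$power" -l -p "monitor.autopackage.sh.$(date +%s)" &
  echo $! > "$power/collect-cpu-load.pid"   # assumption: launcher records the pid
  # Shutdown side, per monitor, as signal_monitor_resources does:
  if [[ -e $power/collect-cpu-load.pid ]]; then
      kill -TERM "$(cat "$power/collect-cpu-load.pid")"
  fi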
00:25:46.347 10:14:31 -- pm/common@21 -- $ date +%s
00:25:46.347 10:14:31 -- pm/common@21 -- $ date +%s
00:25:46.347 10:14:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895271
00:25:46.347 10:14:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895271
00:25:46.347 10:14:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895271
00:25:46.347 10:14:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895271
00:25:46.347 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895271_collect-vmstat.pm.log
00:25:46.347 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895271_collect-cpu-load.pm.log
00:25:46.347 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895271_collect-cpu-temp.pm.log
00:25:46.347 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895271_collect-bmc-pm.bmc.pm.log
00:25:47.284 10:14:32 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:25:47.284 10:14:32 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96
00:25:47.284 10:14:32 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:47.284 10:14:32 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:25:47.284 10:14:32 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:25:47.284 10:14:32 -- spdk/autopackage.sh@19 -- $ timing_finish
00:25:47.284 10:14:32 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:25:47.284 10:14:32 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:25:47.284 10:14:32 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt
00:25:47.284 10:14:32 -- spdk/autopackage.sh@20 -- $ exit 0
00:25:47.284 10:14:32 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:25:47.284 10:14:32 -- pm/common@29 -- $ signal_monitor_resources TERM
00:25:47.284 10:14:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:25:47.284 10:14:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:47.284 10:14:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:25:47.284 10:14:32 -- pm/common@44 -- $ pid=2688498
00:25:47.284 10:14:32 -- pm/common@50 -- $ kill -TERM 2688498
00:25:47.284 10:14:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:47.284 10:14:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:25:47.284 10:14:32 -- pm/common@44 -- $ pid=2688499
00:25:47.284 10:14:32 -- pm/common@50 -- $ kill -TERM 2688499
00:25:47.284 10:14:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:47.284 10:14:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:25:47.284 10:14:32 -- pm/common@44 -- $ pid=2688501
00:25:47.284 10:14:32 -- pm/common@50 -- $ kill -TERM 2688501
00:25:47.284 10:14:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:25:47.284 10:14:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:25:47.284 10:14:32 -- pm/common@44 -- $ pid=2688526
00:25:47.284 10:14:32 -- pm/common@50 -- $ sudo -E kill -TERM 2688526
00:25:47.284 + [[ -n 2275503 ]]
00:25:47.284 + sudo kill 2275503
00:25:47.295 [Pipeline] }
00:25:47.314 [Pipeline] // stage
00:25:47.320 [Pipeline] }
00:25:47.339 [Pipeline] // timeout
00:25:47.345 [Pipeline] }
00:25:47.364 [Pipeline] // catchError
00:25:47.370 [Pipeline] }
00:25:47.388 [Pipeline] // wrap
00:25:47.395 [Pipeline] }
00:25:47.410 [Pipeline] // catchError
00:25:47.420 [Pipeline] stage
00:25:47.423 [Pipeline] { (Epilogue)
00:25:47.438 [Pipeline] catchError
00:25:47.440 [Pipeline] {
00:25:47.455 [Pipeline] echo
00:25:47.458 Cleanup processes
00:25:47.464 [Pipeline] sh
00:25:47.802 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:47.802 2688626 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/sdr.cache
00:25:47.802 2688898 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:47.819 [Pipeline] sh
00:25:48.104 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk
00:25:48.104 ++ grep -v 'sudo pgrep'
00:25:48.104 ++ awk '{print $1}'
00:25:48.104 + sudo kill -9 2688626
00:25:48.116 [Pipeline] sh
00:25:48.397 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:25:56.525 [Pipeline] sh
00:25:56.802 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:25:56.802 Artifacts sizes are good
00:25:56.820 [Pipeline] archiveArtifacts
00:25:56.828 Archiving artifacts
00:25:56.977 [Pipeline] sh
00:25:57.260 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-phy-autotest
00:25:57.273 [Pipeline] cleanWs
00:25:57.279 [WS-CLEANUP] Deleting project workspace...
00:25:57.279 [WS-CLEANUP] Deferred wipeout is used...
00:25:57.285 [WS-CLEANUP] done
00:25:57.286 [Pipeline] }
00:25:57.303 [Pipeline] // catchError
00:25:57.313 [Pipeline] sh
00:25:57.592 + logger -p user.info -t JENKINS-CI
00:25:57.601 [Pipeline] }
00:25:57.616 [Pipeline] // stage
00:25:57.621 [Pipeline] }
00:25:57.636 [Pipeline] // node
00:25:57.642 [Pipeline] End of Pipeline
00:25:57.677 Finished: SUCCESS